00:00:00.001 Started by upstream project "autotest-per-patch" build number 127092 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.030 The recommended git tool is: git 00:00:00.030 using credential 00000000-0000-0000-0000-000000000002 00:00:00.031 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.047 Fetching changes from the remote Git repository 00:00:00.048 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.070 Using shallow fetch with depth 1 00:00:00.070 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.070 > git --version # timeout=10 00:00:00.106 > git --version # 'git version 2.39.2' 00:00:00.106 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.157 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.157 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.232 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.244 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.257 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:03.257 > git config core.sparsecheckout # timeout=10 00:00:03.270 > git read-tree -mu HEAD # timeout=10 00:00:03.287 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:03.317 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:03.317 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:03.426 [Pipeline] Start of Pipeline 00:00:03.439 [Pipeline] library 00:00:03.441 Loading library shm_lib@master 00:00:03.441 Library shm_lib@master is cached. Copying from home. 00:00:03.459 [Pipeline] node 00:00:03.471 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:03.473 [Pipeline] { 00:00:03.482 [Pipeline] catchError 00:00:03.483 [Pipeline] { 00:00:03.497 [Pipeline] wrap 00:00:03.510 [Pipeline] { 00:00:03.517 [Pipeline] stage 00:00:03.518 [Pipeline] { (Prologue) 00:00:03.540 [Pipeline] echo 00:00:03.542 Node: VM-host-SM4 00:00:03.548 [Pipeline] cleanWs 00:00:03.556 [WS-CLEANUP] Deleting project workspace... 00:00:03.556 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.563 [WS-CLEANUP] done 00:00:03.723 [Pipeline] setCustomBuildProperty 00:00:03.823 [Pipeline] httpRequest 00:00:03.849 [Pipeline] echo 00:00:03.850 Sorcerer 10.211.164.101 is alive 00:00:03.856 [Pipeline] httpRequest 00:00:03.859 HttpMethod: GET 00:00:03.859 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:03.861 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:03.862 Response Code: HTTP/1.1 200 OK 00:00:03.863 Success: Status code 200 is in the accepted range: 200,404 00:00:03.863 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:04.281 [Pipeline] sh 00:00:04.559 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:04.575 [Pipeline] httpRequest 00:00:04.591 [Pipeline] echo 00:00:04.593 Sorcerer 10.211.164.101 is alive 00:00:04.598 [Pipeline] httpRequest 00:00:04.602 HttpMethod: GET 00:00:04.603 URL: http://10.211.164.101/packages/spdk_03a38592aad331b65e6bc565573e6f7710f994be.tar.gz 00:00:04.603 Sending request to url: http://10.211.164.101/packages/spdk_03a38592aad331b65e6bc565573e6f7710f994be.tar.gz 00:00:04.604 Response Code: HTTP/1.1 200 OK 00:00:04.604 Success: Status code 200 is in the accepted range: 200,404 00:00:04.605 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_03a38592aad331b65e6bc565573e6f7710f994be.tar.gz 00:00:17.014 [Pipeline] sh 00:00:17.299 + tar --no-same-owner -xf spdk_03a38592aad331b65e6bc565573e6f7710f994be.tar.gz 00:00:20.596 [Pipeline] sh 00:00:20.878 + git -C spdk log --oneline -n5 00:00:20.878 03a38592a raid: clear base bdev configure_cb after executing 00:00:20.878 74f92fe69 raid: complete bdev_raid_create after sb is written 00:00:20.878 d005e023b raid: fix empty slot not updated in sb after resize 00:00:20.878 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:00:20.878 8ee2672c4 test/bdev: Add test for resized RAID with superblock 00:00:20.897 [Pipeline] writeFile 00:00:20.914 [Pipeline] sh 00:00:21.195 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:21.207 [Pipeline] sh 00:00:21.490 + cat autorun-spdk.conf 00:00:21.490 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:21.490 SPDK_TEST_NVMF=1 00:00:21.490 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:21.490 SPDK_TEST_USDT=1 00:00:21.490 SPDK_TEST_NVMF_MDNS=1 00:00:21.490 SPDK_RUN_UBSAN=1 00:00:21.490 NET_TYPE=virt 00:00:21.490 SPDK_JSONRPC_GO_CLIENT=1 00:00:21.490 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:21.496 RUN_NIGHTLY=0 00:00:21.499 [Pipeline] } 00:00:21.517 [Pipeline] // stage 00:00:21.531 [Pipeline] stage 00:00:21.533 [Pipeline] { (Run VM) 00:00:21.547 [Pipeline] sh 00:00:21.853 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:21.853 + echo 'Start stage prepare_nvme.sh' 00:00:21.853 Start stage prepare_nvme.sh 00:00:21.853 + [[ -n 8 ]] 00:00:21.853 + disk_prefix=ex8 00:00:21.853 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:00:21.853 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:00:21.853 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:00:21.853 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:21.853 ++ SPDK_TEST_NVMF=1 00:00:21.853 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:21.853 ++ SPDK_TEST_USDT=1 00:00:21.853 ++ SPDK_TEST_NVMF_MDNS=1 00:00:21.853 ++ SPDK_RUN_UBSAN=1 00:00:21.853 ++ NET_TYPE=virt 00:00:21.853 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:21.853 ++ 
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:21.853 ++ RUN_NIGHTLY=0 00:00:21.853 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:21.853 + nvme_files=() 00:00:21.853 + declare -A nvme_files 00:00:21.853 + backend_dir=/var/lib/libvirt/images/backends 00:00:21.853 + nvme_files['nvme.img']=5G 00:00:21.853 + nvme_files['nvme-cmb.img']=5G 00:00:21.853 + nvme_files['nvme-multi0.img']=4G 00:00:21.853 + nvme_files['nvme-multi1.img']=4G 00:00:21.853 + nvme_files['nvme-multi2.img']=4G 00:00:21.853 + nvme_files['nvme-openstack.img']=8G 00:00:21.853 + nvme_files['nvme-zns.img']=5G 00:00:21.853 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:21.853 + (( SPDK_TEST_FTL == 1 )) 00:00:21.853 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:21.853 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:21.853 + for nvme in "${!nvme_files[@]}" 00:00:21.853 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:00:21.853 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:21.853 + for nvme in "${!nvme_files[@]}" 00:00:21.853 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:00:21.853 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:21.853 + for nvme in "${!nvme_files[@]}" 00:00:21.853 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:00:22.130 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:22.130 + for nvme in "${!nvme_files[@]}" 00:00:22.130 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:00:22.130 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:22.131 + for nvme in "${!nvme_files[@]}" 00:00:22.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:00:22.131 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:22.131 + for nvme in "${!nvme_files[@]}" 00:00:22.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:00:22.389 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:22.389 + for nvme in "${!nvme_files[@]}" 00:00:22.389 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:00:22.648 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:22.648 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:00:22.648 + echo 'End stage prepare_nvme.sh' 00:00:22.648 End stage prepare_nvme.sh 00:00:22.659 [Pipeline] sh 00:00:22.941 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:22.941 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -H -a -v -f fedora38 00:00:22.941 00:00:22.941 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:00:22.941 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:00:22.941 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:22.941 HELP=0 00:00:22.941 DRY_RUN=0 00:00:22.941 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img, 00:00:22.941 NVME_DISKS_TYPE=nvme,nvme, 00:00:22.941 NVME_AUTO_CREATE=0 00:00:22.941 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img, 00:00:22.941 NVME_CMB=,, 00:00:22.941 NVME_PMR=,, 00:00:22.941 NVME_ZNS=,, 00:00:22.941 NVME_MS=,, 00:00:22.941 NVME_FDP=,, 00:00:22.941 SPDK_VAGRANT_DISTRO=fedora38 00:00:22.941 SPDK_VAGRANT_VMCPU=10 00:00:22.941 SPDK_VAGRANT_VMRAM=12288 00:00:22.941 SPDK_VAGRANT_PROVIDER=libvirt 00:00:22.941 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:22.941 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:22.941 SPDK_OPENSTACK_NETWORK=0 00:00:22.941 VAGRANT_PACKAGE_BOX=0 00:00:22.941 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:22.941 FORCE_DISTRO=true 00:00:22.941 VAGRANT_BOX_VERSION= 00:00:22.941 EXTRA_VAGRANTFILES= 00:00:22.941 NIC_MODEL=e1000 00:00:22.941 00:00:22.941 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:00:22.941 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:26.222 Bringing machine 'default' up with 'libvirt' provider... 00:00:26.788 ==> default: Creating image (snapshot of base box volume). 00:00:26.788 ==> default: Creating domain with the following settings... 00:00:26.788 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721843253_72e4dfbe355f6fdf184e 00:00:26.788 ==> default: -- Domain type: kvm 00:00:26.788 ==> default: -- Cpus: 10 00:00:26.788 ==> default: -- Feature: acpi 00:00:26.788 ==> default: -- Feature: apic 00:00:26.788 ==> default: -- Feature: pae 00:00:26.788 ==> default: -- Memory: 12288M 00:00:26.788 ==> default: -- Memory Backing: hugepages: 00:00:26.788 ==> default: -- Management MAC: 00:00:26.788 ==> default: -- Loader: 00:00:26.788 ==> default: -- Nvram: 00:00:26.788 ==> default: -- Base box: spdk/fedora38 00:00:26.788 ==> default: -- Storage pool: default 00:00:26.788 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721843253_72e4dfbe355f6fdf184e.img (20G) 00:00:26.788 ==> default: -- Volume Cache: default 00:00:26.788 ==> default: -- Kernel: 00:00:26.788 ==> default: -- Initrd: 00:00:26.788 ==> default: -- Graphics Type: vnc 00:00:26.788 ==> default: -- Graphics Port: -1 00:00:26.788 ==> default: -- Graphics IP: 127.0.0.1 00:00:26.788 ==> default: -- Graphics Password: Not defined 00:00:26.788 ==> default: -- Video Type: cirrus 00:00:26.788 ==> default: -- Video VRAM: 9216 00:00:26.788 ==> default: -- Sound Type: 00:00:26.788 ==> default: -- Keymap: en-us 00:00:26.788 ==> default: -- TPM Path: 00:00:26.788 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:26.788 ==> default: -- Command line args: 00:00:26.788 ==> default: -> value=-device, 00:00:26.788 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:26.788 ==> default: -> value=-drive, 00:00:26.788 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-0-drive0, 00:00:26.788 ==> 
default: -> value=-device, 00:00:26.788 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:26.788 ==> default: -> value=-device, 00:00:26.788 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:26.788 ==> default: -> value=-drive, 00:00:26.788 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:26.788 ==> default: -> value=-device, 00:00:26.788 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:26.788 ==> default: -> value=-drive, 00:00:26.788 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:26.788 ==> default: -> value=-device, 00:00:26.788 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:26.788 ==> default: -> value=-drive, 00:00:26.788 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:26.788 ==> default: -> value=-device, 00:00:26.788 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:27.046 ==> default: Creating shared folders metadata... 00:00:27.046 ==> default: Starting domain. 00:00:28.947 ==> default: Waiting for domain to get an IP address... 00:00:47.048 ==> default: Waiting for SSH to become available... 00:00:48.418 ==> default: Configuring and enabling network interfaces... 00:00:53.685 default: SSH address: 192.168.121.100:22 00:00:53.685 default: SSH username: vagrant 00:00:53.685 default: SSH auth method: private key 00:00:55.629 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:03.767 ==> default: Mounting SSHFS shared folder... 00:01:05.672 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:05.672 ==> default: Checking Mount.. 00:01:06.605 ==> default: Folder Successfully Mounted! 00:01:06.605 ==> default: Running provisioner: file... 00:01:07.541 default: ~/.gitconfig => .gitconfig 00:01:08.108 00:01:08.108 SUCCESS! 00:01:08.108 00:01:08.108 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:08.108 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:08.109 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
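The -drive/-device pairs in the domain definition above describe two emulated NVMe controllers: serial 12340 with a single namespace backed by ex8-nvme.img, and serial 12341 with three namespaces backed by the ex8-nvme-multi{0,1,2}.img backends. As a rough bash sketch of how such an argument list could be assembled (illustrative only, not the actual Vagrantfile logic; the helper function names are made up for this sketch):

backend=/var/lib/libvirt/images/backends
qemu_args=()

add_nvme_ctrl() {   # $1=controller id  $2=serial  $3=PCI addr
  qemu_args+=(-device "nvme,id=$1,serial=$2,addr=$3")
}

add_nvme_ns() {     # $1=controller id  $2=nsid  $3=backing image
  local drive="$1-drive$(( $2 - 1 ))"
  qemu_args+=(-drive  "format=raw,file=$3,if=none,id=$drive")
  qemu_args+=(-device "nvme-ns,drive=$drive,bus=$1,nsid=$2,zoned=false,logical_block_size=4096,physical_block_size=4096")
}

# Controller 0: one namespace on ex8-nvme.img (serial 12340, addr 0x10)
add_nvme_ctrl nvme-0 12340 0x10
add_nvme_ns   nvme-0 1 "$backend/ex8-nvme.img"

# Controller 1: three namespaces on the ex8-nvme-multi*.img backends (serial 12341, addr 0x11)
add_nvme_ctrl nvme-1 12341 0x11
for i in 0 1 2; do
  add_nvme_ns nvme-1 $(( i + 1 )) "$backend/ex8-nvme-multi$i.img"
done

printf '%s\n' "${qemu_args[@]}"   # inspect; these values would be appended to the qemu-system-x86_64 command line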
00:01:08.109 00:01:08.116 [Pipeline] } 00:01:08.139 [Pipeline] // stage 00:01:08.148 [Pipeline] dir 00:01:08.148 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:08.149 [Pipeline] { 00:01:08.159 [Pipeline] catchError 00:01:08.160 [Pipeline] { 00:01:08.174 [Pipeline] sh 00:01:08.457 + vagrant ssh-config --host vagrant 00:01:08.457 + sed -ne /^Host/,$p 00:01:08.457 + tee ssh_conf 00:01:12.644 Host vagrant 00:01:12.644 HostName 192.168.121.100 00:01:12.644 User vagrant 00:01:12.644 Port 22 00:01:12.644 UserKnownHostsFile /dev/null 00:01:12.644 StrictHostKeyChecking no 00:01:12.644 PasswordAuthentication no 00:01:12.644 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:12.644 IdentitiesOnly yes 00:01:12.644 LogLevel FATAL 00:01:12.644 ForwardAgent yes 00:01:12.644 ForwardX11 yes 00:01:12.644 00:01:12.659 [Pipeline] withEnv 00:01:12.661 [Pipeline] { 00:01:12.677 [Pipeline] sh 00:01:12.952 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:12.952 source /etc/os-release 00:01:12.952 [[ -e /image.version ]] && img=$(< /image.version) 00:01:12.952 # Minimal, systemd-like check. 00:01:12.952 if [[ -e /.dockerenv ]]; then 00:01:12.952 # Clear garbage from the node's name: 00:01:12.952 # agt-er_autotest_547-896 -> autotest_547-896 00:01:12.952 # $HOSTNAME is the actual container id 00:01:12.952 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:12.952 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:12.952 # We can assume this is a mount from a host where container is running, 00:01:12.952 # so fetch its hostname to easily identify the target swarm worker. 00:01:12.952 container="$(< /etc/hostname) ($agent)" 00:01:12.952 else 00:01:12.952 # Fallback 00:01:12.952 container=$agent 00:01:12.952 fi 00:01:12.952 fi 00:01:12.952 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:12.952 00:01:13.031 [Pipeline] } 00:01:13.054 [Pipeline] // withEnv 00:01:13.065 [Pipeline] setCustomBuildProperty 00:01:13.083 [Pipeline] stage 00:01:13.086 [Pipeline] { (Tests) 00:01:13.108 [Pipeline] sh 00:01:13.385 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:13.660 [Pipeline] sh 00:01:13.941 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:14.215 [Pipeline] timeout 00:01:14.215 Timeout set to expire in 40 min 00:01:14.217 [Pipeline] { 00:01:14.232 [Pipeline] sh 00:01:14.512 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:15.077 HEAD is now at 03a38592a raid: clear base bdev configure_cb after executing 00:01:15.090 [Pipeline] sh 00:01:15.434 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:15.447 [Pipeline] sh 00:01:15.725 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:15.996 [Pipeline] sh 00:01:16.275 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:16.275 ++ readlink -f spdk_repo 00:01:16.275 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:16.275 + [[ -n /home/vagrant/spdk_repo ]] 00:01:16.276 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:16.276 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:16.276 + [[ -d 
/home/vagrant/spdk_repo/spdk ]] 00:01:16.276 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:16.276 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:16.276 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:16.276 + cd /home/vagrant/spdk_repo 00:01:16.276 + source /etc/os-release 00:01:16.276 ++ NAME='Fedora Linux' 00:01:16.276 ++ VERSION='38 (Cloud Edition)' 00:01:16.276 ++ ID=fedora 00:01:16.276 ++ VERSION_ID=38 00:01:16.276 ++ VERSION_CODENAME= 00:01:16.276 ++ PLATFORM_ID=platform:f38 00:01:16.276 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:16.276 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:16.276 ++ LOGO=fedora-logo-icon 00:01:16.276 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:16.276 ++ HOME_URL=https://fedoraproject.org/ 00:01:16.276 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:16.276 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:16.276 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:16.276 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:16.276 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:16.276 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:16.276 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:16.276 ++ SUPPORT_END=2024-05-14 00:01:16.276 ++ VARIANT='Cloud Edition' 00:01:16.276 ++ VARIANT_ID=cloud 00:01:16.276 + uname -a 00:01:16.276 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:16.276 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:16.842 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:16.842 Hugepages 00:01:16.842 node hugesize free / total 00:01:16.842 node0 1048576kB 0 / 0 00:01:16.842 node0 2048kB 0 / 0 00:01:16.842 00:01:16.842 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:16.842 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:16.842 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:16.842 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:16.842 + rm -f /tmp/spdk-ld-path 00:01:16.842 + source autorun-spdk.conf 00:01:16.842 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.842 ++ SPDK_TEST_NVMF=1 00:01:16.842 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.842 ++ SPDK_TEST_USDT=1 00:01:16.842 ++ SPDK_TEST_NVMF_MDNS=1 00:01:16.842 ++ SPDK_RUN_UBSAN=1 00:01:16.842 ++ NET_TYPE=virt 00:01:16.842 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:16.842 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.842 ++ RUN_NIGHTLY=0 00:01:16.842 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:16.842 + [[ -n '' ]] 00:01:16.842 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:16.842 + for M in /var/spdk/build-*-manifest.txt 00:01:16.842 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:16.842 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:16.842 + for M in /var/spdk/build-*-manifest.txt 00:01:16.842 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:16.842 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:16.842 ++ uname 00:01:16.842 + [[ Linux == \L\i\n\u\x ]] 00:01:16.842 + sudo dmesg -T 00:01:17.101 + sudo dmesg --clear 00:01:17.101 + dmesg_pid=5155 00:01:17.101 + sudo dmesg -Tw 00:01:17.101 + [[ Fedora Linux == FreeBSD ]] 00:01:17.101 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.101 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.101 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 
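The ssh_conf written earlier with "vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf" is what lets the pipeline drive the VM with plain ssh/scp against the host alias vagrant instead of going through vagrant ssh each time. A minimal sketch of that pattern, reusing the paths that appear in this log (assuming the libvirt box from above is already running):

cd /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt   # directory holding the Vagrantfile
vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf

# Reuse the generated config for ordinary scp/ssh, as the pipeline does
# for autorun-spdk.conf and autoruner.sh:
scp -F ssh_conf -r ../autorun-spdk.conf vagrant@vagrant:spdk_repo
ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo'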
00:01:17.101 + [[ -x /usr/src/fio-static/fio ]] 00:01:17.101 + export FIO_BIN=/usr/src/fio-static/fio 00:01:17.101 + FIO_BIN=/usr/src/fio-static/fio 00:01:17.101 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:17.101 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:17.101 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:17.101 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:17.101 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:17.101 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:17.101 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:17.101 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:17.101 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:17.101 Test configuration: 00:01:17.101 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.101 SPDK_TEST_NVMF=1 00:01:17.101 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.101 SPDK_TEST_USDT=1 00:01:17.101 SPDK_TEST_NVMF_MDNS=1 00:01:17.101 SPDK_RUN_UBSAN=1 00:01:17.101 NET_TYPE=virt 00:01:17.101 SPDK_JSONRPC_GO_CLIENT=1 00:01:17.101 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.101 RUN_NIGHTLY=0 17:48:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:17.101 17:48:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:17.101 17:48:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:17.101 17:48:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:17.101 17:48:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.101 17:48:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.101 17:48:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.101 17:48:23 -- paths/export.sh@5 -- $ export PATH 00:01:17.101 17:48:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.101 17:48:23 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:17.101 17:48:23 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:17.101 17:48:23 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721843303.XXXXXX 00:01:17.101 
17:48:23 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721843303.K0RgtJ 00:01:17.101 17:48:23 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:17.101 17:48:23 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:17.101 17:48:23 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:17.101 17:48:23 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:17.101 17:48:23 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:17.101 17:48:23 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:17.101 17:48:23 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:17.101 17:48:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.101 17:48:23 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:01:17.101 17:48:23 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:17.101 17:48:23 -- pm/common@17 -- $ local monitor 00:01:17.101 17:48:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:17.101 17:48:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:17.101 17:48:23 -- pm/common@25 -- $ sleep 1 00:01:17.101 17:48:23 -- pm/common@21 -- $ date +%s 00:01:17.101 17:48:23 -- pm/common@21 -- $ date +%s 00:01:17.101 17:48:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721843303 00:01:17.101 17:48:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721843303 00:01:17.101 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721843303_collect-vmstat.pm.log 00:01:17.101 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721843303_collect-cpu-load.pm.log 00:01:18.035 17:48:24 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:18.035 17:48:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:18.035 17:48:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:18.035 17:48:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:18.035 17:48:24 -- spdk/autobuild.sh@16 -- $ date -u 00:01:18.035 Wed Jul 24 05:48:24 PM UTC 2024 00:01:18.035 17:48:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:18.035 v24.09-pre-320-g03a38592a 00:01:18.035 17:48:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:18.035 17:48:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:18.035 17:48:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:18.035 17:48:24 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:18.035 17:48:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:18.035 17:48:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.035 ************************************ 00:01:18.035 START TEST ubsan 00:01:18.035 ************************************ 00:01:18.035 using ubsan 00:01:18.035 17:48:25 ubsan -- common/autotest_common.sh@1125 -- 
$ echo 'using ubsan' 00:01:18.035 00:01:18.035 real 0m0.000s 00:01:18.035 user 0m0.000s 00:01:18.035 sys 0m0.000s 00:01:18.035 17:48:25 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:18.035 17:48:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:18.035 ************************************ 00:01:18.035 END TEST ubsan 00:01:18.035 ************************************ 00:01:18.293 17:48:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:18.293 17:48:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:18.293 17:48:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:18.293 17:48:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:18.293 17:48:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:18.293 17:48:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:18.293 17:48:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:18.293 17:48:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:18.293 17:48:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:18.293 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:18.293 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:18.861 Using 'verbs' RDMA provider 00:01:32.020 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:46.892 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:46.892 go version go1.21.1 linux/amd64 00:01:46.892 Creating mk/config.mk...done. 00:01:46.892 Creating mk/cc.flags.mk...done. 00:01:46.892 Type 'make' to build. 00:01:46.892 17:48:52 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:46.892 17:48:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:46.892 17:48:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:46.892 17:48:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.892 ************************************ 00:01:46.892 START TEST make 00:01:46.892 ************************************ 00:01:46.892 17:48:52 make -- common/autotest_common.sh@1125 -- $ make -j10 00:01:46.892 make[1]: Nothing to be done for 'all'. 
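The configure invocation above is assembled by get_config_params in autobuild_common.sh. Reproducing the same SPDK build by hand (a sketch only, assuming the repository is already checked out at /home/vagrant/spdk_repo/spdk; this is not the autobuild script itself) would look roughly like:

cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
            --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
            --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang \
            --with-shared
make -j10    # same parallelism that run_test passes to make in the log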
00:01:59.155 The Meson build system 00:01:59.155 Version: 1.3.1 00:01:59.155 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:59.155 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:59.155 Build type: native build 00:01:59.155 Program cat found: YES (/usr/bin/cat) 00:01:59.155 Project name: DPDK 00:01:59.155 Project version: 24.03.0 00:01:59.155 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:59.155 C linker for the host machine: cc ld.bfd 2.39-16 00:01:59.155 Host machine cpu family: x86_64 00:01:59.155 Host machine cpu: x86_64 00:01:59.155 Message: ## Building in Developer Mode ## 00:01:59.155 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.155 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:59.155 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.155 Program python3 found: YES (/usr/bin/python3) 00:01:59.155 Program cat found: YES (/usr/bin/cat) 00:01:59.155 Compiler for C supports arguments -march=native: YES 00:01:59.155 Checking for size of "void *" : 8 00:01:59.155 Checking for size of "void *" : 8 (cached) 00:01:59.155 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:59.155 Library m found: YES 00:01:59.155 Library numa found: YES 00:01:59.155 Has header "numaif.h" : YES 00:01:59.155 Library fdt found: NO 00:01:59.155 Library execinfo found: NO 00:01:59.155 Has header "execinfo.h" : YES 00:01:59.155 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:59.155 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.155 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.155 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.155 Run-time dependency openssl found: YES 3.0.9 00:01:59.155 Run-time dependency libpcap found: YES 1.10.4 00:01:59.155 Has header "pcap.h" with dependency libpcap: YES 00:01:59.155 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.155 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.155 Compiler for C supports arguments -Wformat: YES 00:01:59.155 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.155 Compiler for C supports arguments -Wformat-security: NO 00:01:59.155 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.155 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.155 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.155 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.155 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.155 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.155 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.155 Compiler for C supports arguments -Wundef: YES 00:01:59.155 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.155 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.155 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.155 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.155 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.155 Program objdump found: YES (/usr/bin/objdump) 00:01:59.155 Compiler for C supports arguments -mavx512f: YES 00:01:59.155 Checking if "AVX512 checking" compiles: YES 00:01:59.155 Fetching value of define "__SSE4_2__" : 1 00:01:59.155 Fetching value of define 
"__AES__" : 1 00:01:59.156 Fetching value of define "__AVX__" : 1 00:01:59.156 Fetching value of define "__AVX2__" : 1 00:01:59.156 Fetching value of define "__AVX512BW__" : 1 00:01:59.156 Fetching value of define "__AVX512CD__" : 1 00:01:59.156 Fetching value of define "__AVX512DQ__" : 1 00:01:59.156 Fetching value of define "__AVX512F__" : 1 00:01:59.156 Fetching value of define "__AVX512VL__" : 1 00:01:59.156 Fetching value of define "__PCLMUL__" : 1 00:01:59.156 Fetching value of define "__RDRND__" : 1 00:01:59.156 Fetching value of define "__RDSEED__" : 1 00:01:59.156 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.156 Fetching value of define "__znver1__" : (undefined) 00:01:59.156 Fetching value of define "__znver2__" : (undefined) 00:01:59.156 Fetching value of define "__znver3__" : (undefined) 00:01:59.156 Fetching value of define "__znver4__" : (undefined) 00:01:59.156 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.156 Message: lib/log: Defining dependency "log" 00:01:59.156 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.156 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.156 Checking for function "getentropy" : NO 00:01:59.156 Message: lib/eal: Defining dependency "eal" 00:01:59.156 Message: lib/ring: Defining dependency "ring" 00:01:59.156 Message: lib/rcu: Defining dependency "rcu" 00:01:59.156 Message: lib/mempool: Defining dependency "mempool" 00:01:59.156 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.156 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.156 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.156 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:59.156 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:59.156 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:59.156 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:59.156 Compiler for C supports arguments -mpclmul: YES 00:01:59.156 Compiler for C supports arguments -maes: YES 00:01:59.156 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.156 Compiler for C supports arguments -mavx512bw: YES 00:01:59.156 Compiler for C supports arguments -mavx512dq: YES 00:01:59.156 Compiler for C supports arguments -mavx512vl: YES 00:01:59.156 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.156 Compiler for C supports arguments -mavx2: YES 00:01:59.156 Compiler for C supports arguments -mavx: YES 00:01:59.156 Message: lib/net: Defining dependency "net" 00:01:59.156 Message: lib/meter: Defining dependency "meter" 00:01:59.156 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.156 Message: lib/pci: Defining dependency "pci" 00:01:59.156 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.156 Message: lib/hash: Defining dependency "hash" 00:01:59.156 Message: lib/timer: Defining dependency "timer" 00:01:59.156 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.156 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.156 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.156 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.156 Message: lib/power: Defining dependency "power" 00:01:59.156 Message: lib/reorder: Defining dependency "reorder" 00:01:59.156 Message: lib/security: Defining dependency "security" 00:01:59.156 Has header "linux/userfaultfd.h" : YES 00:01:59.156 Has header "linux/vduse.h" : YES 00:01:59.156 Message: lib/vhost: Defining dependency "vhost" 00:01:59.156 Compiler for C 
supports arguments -Wno-format-truncation: YES (cached) 00:01:59.156 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.156 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.156 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.156 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:59.156 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:59.156 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:59.156 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:59.156 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:59.156 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:59.156 Program doxygen found: YES (/usr/bin/doxygen) 00:01:59.156 Configuring doxy-api-html.conf using configuration 00:01:59.156 Configuring doxy-api-man.conf using configuration 00:01:59.156 Program mandb found: YES (/usr/bin/mandb) 00:01:59.156 Program sphinx-build found: NO 00:01:59.156 Configuring rte_build_config.h using configuration 00:01:59.156 Message: 00:01:59.156 ================= 00:01:59.156 Applications Enabled 00:01:59.156 ================= 00:01:59.156 00:01:59.156 apps: 00:01:59.156 00:01:59.156 00:01:59.156 Message: 00:01:59.156 ================= 00:01:59.156 Libraries Enabled 00:01:59.156 ================= 00:01:59.156 00:01:59.156 libs: 00:01:59.156 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:59.156 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:59.156 cryptodev, dmadev, power, reorder, security, vhost, 00:01:59.156 00:01:59.156 Message: 00:01:59.156 =============== 00:01:59.156 Drivers Enabled 00:01:59.156 =============== 00:01:59.156 00:01:59.156 common: 00:01:59.156 00:01:59.156 bus: 00:01:59.156 pci, vdev, 00:01:59.156 mempool: 00:01:59.156 ring, 00:01:59.156 dma: 00:01:59.156 00:01:59.156 net: 00:01:59.156 00:01:59.156 crypto: 00:01:59.156 00:01:59.156 compress: 00:01:59.156 00:01:59.156 vdpa: 00:01:59.156 00:01:59.156 00:01:59.156 Message: 00:01:59.156 ================= 00:01:59.156 Content Skipped 00:01:59.156 ================= 00:01:59.156 00:01:59.156 apps: 00:01:59.156 dumpcap: explicitly disabled via build config 00:01:59.156 graph: explicitly disabled via build config 00:01:59.156 pdump: explicitly disabled via build config 00:01:59.156 proc-info: explicitly disabled via build config 00:01:59.156 test-acl: explicitly disabled via build config 00:01:59.156 test-bbdev: explicitly disabled via build config 00:01:59.156 test-cmdline: explicitly disabled via build config 00:01:59.156 test-compress-perf: explicitly disabled via build config 00:01:59.156 test-crypto-perf: explicitly disabled via build config 00:01:59.156 test-dma-perf: explicitly disabled via build config 00:01:59.156 test-eventdev: explicitly disabled via build config 00:01:59.156 test-fib: explicitly disabled via build config 00:01:59.156 test-flow-perf: explicitly disabled via build config 00:01:59.156 test-gpudev: explicitly disabled via build config 00:01:59.156 test-mldev: explicitly disabled via build config 00:01:59.156 test-pipeline: explicitly disabled via build config 00:01:59.156 test-pmd: explicitly disabled via build config 00:01:59.156 test-regex: explicitly disabled via build config 00:01:59.156 test-sad: explicitly disabled via build config 00:01:59.156 test-security-perf: explicitly disabled via build config 00:01:59.156 00:01:59.156 libs: 00:01:59.156 argparse: 
explicitly disabled via build config 00:01:59.156 metrics: explicitly disabled via build config 00:01:59.156 acl: explicitly disabled via build config 00:01:59.156 bbdev: explicitly disabled via build config 00:01:59.156 bitratestats: explicitly disabled via build config 00:01:59.156 bpf: explicitly disabled via build config 00:01:59.156 cfgfile: explicitly disabled via build config 00:01:59.156 distributor: explicitly disabled via build config 00:01:59.156 efd: explicitly disabled via build config 00:01:59.156 eventdev: explicitly disabled via build config 00:01:59.156 dispatcher: explicitly disabled via build config 00:01:59.156 gpudev: explicitly disabled via build config 00:01:59.156 gro: explicitly disabled via build config 00:01:59.156 gso: explicitly disabled via build config 00:01:59.156 ip_frag: explicitly disabled via build config 00:01:59.156 jobstats: explicitly disabled via build config 00:01:59.156 latencystats: explicitly disabled via build config 00:01:59.156 lpm: explicitly disabled via build config 00:01:59.156 member: explicitly disabled via build config 00:01:59.156 pcapng: explicitly disabled via build config 00:01:59.156 rawdev: explicitly disabled via build config 00:01:59.156 regexdev: explicitly disabled via build config 00:01:59.156 mldev: explicitly disabled via build config 00:01:59.156 rib: explicitly disabled via build config 00:01:59.156 sched: explicitly disabled via build config 00:01:59.156 stack: explicitly disabled via build config 00:01:59.156 ipsec: explicitly disabled via build config 00:01:59.156 pdcp: explicitly disabled via build config 00:01:59.156 fib: explicitly disabled via build config 00:01:59.156 port: explicitly disabled via build config 00:01:59.156 pdump: explicitly disabled via build config 00:01:59.156 table: explicitly disabled via build config 00:01:59.156 pipeline: explicitly disabled via build config 00:01:59.156 graph: explicitly disabled via build config 00:01:59.156 node: explicitly disabled via build config 00:01:59.156 00:01:59.156 drivers: 00:01:59.156 common/cpt: not in enabled drivers build config 00:01:59.156 common/dpaax: not in enabled drivers build config 00:01:59.156 common/iavf: not in enabled drivers build config 00:01:59.156 common/idpf: not in enabled drivers build config 00:01:59.156 common/ionic: not in enabled drivers build config 00:01:59.156 common/mvep: not in enabled drivers build config 00:01:59.156 common/octeontx: not in enabled drivers build config 00:01:59.156 bus/auxiliary: not in enabled drivers build config 00:01:59.156 bus/cdx: not in enabled drivers build config 00:01:59.156 bus/dpaa: not in enabled drivers build config 00:01:59.156 bus/fslmc: not in enabled drivers build config 00:01:59.156 bus/ifpga: not in enabled drivers build config 00:01:59.156 bus/platform: not in enabled drivers build config 00:01:59.156 bus/uacce: not in enabled drivers build config 00:01:59.156 bus/vmbus: not in enabled drivers build config 00:01:59.156 common/cnxk: not in enabled drivers build config 00:01:59.156 common/mlx5: not in enabled drivers build config 00:01:59.156 common/nfp: not in enabled drivers build config 00:01:59.156 common/nitrox: not in enabled drivers build config 00:01:59.156 common/qat: not in enabled drivers build config 00:01:59.156 common/sfc_efx: not in enabled drivers build config 00:01:59.157 mempool/bucket: not in enabled drivers build config 00:01:59.157 mempool/cnxk: not in enabled drivers build config 00:01:59.157 mempool/dpaa: not in enabled drivers build config 00:01:59.157 mempool/dpaa2: 
not in enabled drivers build config 00:01:59.157 mempool/octeontx: not in enabled drivers build config 00:01:59.157 mempool/stack: not in enabled drivers build config 00:01:59.157 dma/cnxk: not in enabled drivers build config 00:01:59.157 dma/dpaa: not in enabled drivers build config 00:01:59.157 dma/dpaa2: not in enabled drivers build config 00:01:59.157 dma/hisilicon: not in enabled drivers build config 00:01:59.157 dma/idxd: not in enabled drivers build config 00:01:59.157 dma/ioat: not in enabled drivers build config 00:01:59.157 dma/skeleton: not in enabled drivers build config 00:01:59.157 net/af_packet: not in enabled drivers build config 00:01:59.157 net/af_xdp: not in enabled drivers build config 00:01:59.157 net/ark: not in enabled drivers build config 00:01:59.157 net/atlantic: not in enabled drivers build config 00:01:59.157 net/avp: not in enabled drivers build config 00:01:59.157 net/axgbe: not in enabled drivers build config 00:01:59.157 net/bnx2x: not in enabled drivers build config 00:01:59.157 net/bnxt: not in enabled drivers build config 00:01:59.157 net/bonding: not in enabled drivers build config 00:01:59.157 net/cnxk: not in enabled drivers build config 00:01:59.157 net/cpfl: not in enabled drivers build config 00:01:59.157 net/cxgbe: not in enabled drivers build config 00:01:59.157 net/dpaa: not in enabled drivers build config 00:01:59.157 net/dpaa2: not in enabled drivers build config 00:01:59.157 net/e1000: not in enabled drivers build config 00:01:59.157 net/ena: not in enabled drivers build config 00:01:59.157 net/enetc: not in enabled drivers build config 00:01:59.157 net/enetfec: not in enabled drivers build config 00:01:59.157 net/enic: not in enabled drivers build config 00:01:59.157 net/failsafe: not in enabled drivers build config 00:01:59.157 net/fm10k: not in enabled drivers build config 00:01:59.157 net/gve: not in enabled drivers build config 00:01:59.157 net/hinic: not in enabled drivers build config 00:01:59.157 net/hns3: not in enabled drivers build config 00:01:59.157 net/i40e: not in enabled drivers build config 00:01:59.157 net/iavf: not in enabled drivers build config 00:01:59.157 net/ice: not in enabled drivers build config 00:01:59.157 net/idpf: not in enabled drivers build config 00:01:59.157 net/igc: not in enabled drivers build config 00:01:59.157 net/ionic: not in enabled drivers build config 00:01:59.157 net/ipn3ke: not in enabled drivers build config 00:01:59.157 net/ixgbe: not in enabled drivers build config 00:01:59.157 net/mana: not in enabled drivers build config 00:01:59.157 net/memif: not in enabled drivers build config 00:01:59.157 net/mlx4: not in enabled drivers build config 00:01:59.157 net/mlx5: not in enabled drivers build config 00:01:59.157 net/mvneta: not in enabled drivers build config 00:01:59.157 net/mvpp2: not in enabled drivers build config 00:01:59.157 net/netvsc: not in enabled drivers build config 00:01:59.157 net/nfb: not in enabled drivers build config 00:01:59.157 net/nfp: not in enabled drivers build config 00:01:59.157 net/ngbe: not in enabled drivers build config 00:01:59.157 net/null: not in enabled drivers build config 00:01:59.157 net/octeontx: not in enabled drivers build config 00:01:59.157 net/octeon_ep: not in enabled drivers build config 00:01:59.157 net/pcap: not in enabled drivers build config 00:01:59.157 net/pfe: not in enabled drivers build config 00:01:59.157 net/qede: not in enabled drivers build config 00:01:59.157 net/ring: not in enabled drivers build config 00:01:59.157 net/sfc: not in 
enabled drivers build config 00:01:59.157 net/softnic: not in enabled drivers build config 00:01:59.157 net/tap: not in enabled drivers build config 00:01:59.157 net/thunderx: not in enabled drivers build config 00:01:59.157 net/txgbe: not in enabled drivers build config 00:01:59.157 net/vdev_netvsc: not in enabled drivers build config 00:01:59.157 net/vhost: not in enabled drivers build config 00:01:59.157 net/virtio: not in enabled drivers build config 00:01:59.157 net/vmxnet3: not in enabled drivers build config 00:01:59.157 raw/*: missing internal dependency, "rawdev" 00:01:59.157 crypto/armv8: not in enabled drivers build config 00:01:59.157 crypto/bcmfs: not in enabled drivers build config 00:01:59.157 crypto/caam_jr: not in enabled drivers build config 00:01:59.157 crypto/ccp: not in enabled drivers build config 00:01:59.157 crypto/cnxk: not in enabled drivers build config 00:01:59.157 crypto/dpaa_sec: not in enabled drivers build config 00:01:59.157 crypto/dpaa2_sec: not in enabled drivers build config 00:01:59.157 crypto/ipsec_mb: not in enabled drivers build config 00:01:59.157 crypto/mlx5: not in enabled drivers build config 00:01:59.157 crypto/mvsam: not in enabled drivers build config 00:01:59.157 crypto/nitrox: not in enabled drivers build config 00:01:59.157 crypto/null: not in enabled drivers build config 00:01:59.157 crypto/octeontx: not in enabled drivers build config 00:01:59.157 crypto/openssl: not in enabled drivers build config 00:01:59.157 crypto/scheduler: not in enabled drivers build config 00:01:59.157 crypto/uadk: not in enabled drivers build config 00:01:59.157 crypto/virtio: not in enabled drivers build config 00:01:59.157 compress/isal: not in enabled drivers build config 00:01:59.157 compress/mlx5: not in enabled drivers build config 00:01:59.157 compress/nitrox: not in enabled drivers build config 00:01:59.157 compress/octeontx: not in enabled drivers build config 00:01:59.157 compress/zlib: not in enabled drivers build config 00:01:59.157 regex/*: missing internal dependency, "regexdev" 00:01:59.157 ml/*: missing internal dependency, "mldev" 00:01:59.157 vdpa/ifc: not in enabled drivers build config 00:01:59.157 vdpa/mlx5: not in enabled drivers build config 00:01:59.157 vdpa/nfp: not in enabled drivers build config 00:01:59.157 vdpa/sfc: not in enabled drivers build config 00:01:59.157 event/*: missing internal dependency, "eventdev" 00:01:59.157 baseband/*: missing internal dependency, "bbdev" 00:01:59.157 gpu/*: missing internal dependency, "gpudev" 00:01:59.157 00:01:59.157 00:01:59.157 Build targets in project: 85 00:01:59.157 00:01:59.157 DPDK 24.03.0 00:01:59.157 00:01:59.157 User defined options 00:01:59.157 buildtype : debug 00:01:59.157 default_library : shared 00:01:59.157 libdir : lib 00:01:59.157 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:59.157 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:59.157 c_link_args : 00:01:59.157 cpu_instruction_set: native 00:01:59.157 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:59.157 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:59.157 enable_docs : false 00:01:59.157 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:59.157 enable_kmods : false 00:01:59.157 max_lcores : 128 00:01:59.157 tests : false 00:01:59.157 00:01:59.157 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.724 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:59.724 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.724 [2/268] Linking static target lib/librte_kvargs.a 00:01:59.724 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:59.724 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:59.724 [5/268] Linking static target lib/librte_log.a 00:01:59.724 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.329 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.329 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.329 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.329 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.329 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.329 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.329 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.587 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.587 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.587 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.587 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.587 [18/268] Linking static target lib/librte_telemetry.a 00:02:00.844 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.844 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.844 [21/268] Linking target lib/librte_log.so.24.1 00:02:01.411 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.411 [23/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:01.411 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.411 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.411 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.411 [27/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.411 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.411 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.669 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.669 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.669 [32/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:01.669 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 
00:02:01.669 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.927 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:01.927 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:01.927 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.185 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.185 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:02.185 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.444 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.444 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.444 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.444 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.444 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.444 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.702 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.702 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.702 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.960 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.960 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.219 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.219 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.219 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.477 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.477 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.477 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:03.477 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.735 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.735 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.735 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.735 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.735 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.735 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.993 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.993 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.252 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:04.252 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:04.252 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:04.252 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:04.510 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:04.510 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:04.510 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 
00:02:04.510 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:04.510 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:04.510 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:04.510 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:04.768 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:04.768 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:05.027 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:05.027 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.027 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:05.027 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:05.284 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:05.284 [85/268] Linking static target lib/librte_eal.a 00:02:05.542 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:05.542 [87/268] Linking static target lib/librte_ring.a 00:02:05.542 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:05.542 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:05.542 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:05.800 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:05.800 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:05.800 [93/268] Linking static target lib/librte_rcu.a 00:02:05.800 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.058 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.058 [96/268] Linking static target lib/librte_mempool.a 00:02:06.058 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.316 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:06.316 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:06.316 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:06.316 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:06.316 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.316 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:06.316 [104/268] Linking static target lib/librte_mbuf.a 00:02:06.573 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:06.573 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:06.832 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.091 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.091 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.091 [110/268] Linking static target lib/librte_net.a 00:02:07.091 [111/268] Linking static target lib/librte_meter.a 00:02:07.091 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.091 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.349 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.349 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.627 [116/268] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.627 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.627 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:07.888 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.146 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.404 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.404 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:08.404 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:08.662 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:08.662 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:08.662 [126/268] Linking static target lib/librte_pci.a 00:02:08.920 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:08.920 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.178 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.178 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.178 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.178 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:09.178 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.178 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:09.178 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.436 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.436 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.436 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.436 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.436 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.436 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.436 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.436 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.694 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:09.694 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:09.694 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:09.694 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:09.952 [148/268] Linking static target lib/librte_cmdline.a 00:02:09.953 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:09.953 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:10.210 [151/268] Linking static target lib/librte_ethdev.a 00:02:10.210 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.210 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.210 [154/268] Linking static target lib/librte_timer.a 00:02:10.210 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:10.469 [156/268] Compiling C 
object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:10.469 [157/268] Linking static target lib/librte_hash.a 00:02:10.469 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.469 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:10.727 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:10.727 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.727 [162/268] Linking static target lib/librte_compressdev.a 00:02:10.986 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:10.986 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:10.986 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.986 [166/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.245 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:11.245 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:11.245 [169/268] Linking static target lib/librte_dmadev.a 00:02:11.503 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:11.761 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:11.761 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:11.761 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.019 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.019 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.019 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.019 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:12.276 [178/268] Linking static target lib/librte_cryptodev.a 00:02:12.276 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.276 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.276 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.534 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:12.534 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.534 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.534 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.534 [186/268] Linking static target lib/librte_power.a 00:02:12.792 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.792 [188/268] Linking static target lib/librte_reorder.a 00:02:13.051 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.051 [190/268] Linking static target lib/librte_security.a 00:02:13.051 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.309 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.309 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.567 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:13.567 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.133 
[196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.133 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:14.133 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.391 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:14.391 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:14.391 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:14.957 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.957 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:14.957 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:14.957 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:14.957 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:15.216 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.216 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.216 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:15.216 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.216 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.473 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.474 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.474 [214/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.474 [215/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.474 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.474 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.474 [218/268] Linking static target drivers/librte_bus_pci.a 00:02:15.474 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.474 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.474 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.731 [222/268] Linking static target drivers/librte_bus_vdev.a 00:02:15.731 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:15.731 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.731 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:15.731 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.990 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.248 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.815 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:16.815 [230/268] Linking static target lib/librte_vhost.a 00:02:18.717 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.975 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:19.233 [233/268] Linking target lib/librte_eal.so.24.1 00:02:19.233 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:19.233 [235/268] Linking target lib/librte_pci.so.24.1 00:02:19.233 [236/268] Linking target lib/librte_timer.so.24.1 00:02:19.233 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:19.233 [238/268] Linking target lib/librte_ring.so.24.1 00:02:19.233 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:19.233 [240/268] Linking target lib/librte_meter.so.24.1 00:02:19.492 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:19.492 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:19.492 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:19.492 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:19.492 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:19.492 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:19.492 [247/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.492 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:19.492 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:19.751 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:19.751 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:19.751 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:19.751 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:20.009 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:20.009 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:20.009 [256/268] Linking target lib/librte_net.so.24.1 00:02:20.009 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:20.009 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:20.009 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:20.267 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:20.267 [261/268] Linking target lib/librte_hash.so.24.1 00:02:20.267 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:20.267 [263/268] Linking target lib/librte_security.so.24.1 00:02:20.267 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:20.267 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:20.525 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:20.525 [267/268] Linking target lib/librte_vhost.so.24.1 00:02:20.525 [268/268] Linking target lib/librte_power.so.24.1 00:02:20.525 INFO: autodetecting backend as ninja 00:02:20.525 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:21.901 CC lib/ut_mock/mock.o 00:02:21.901 CC lib/ut/ut.o 00:02:21.901 CC lib/log/log.o 00:02:21.901 CC lib/log/log_flags.o 00:02:21.901 CC lib/log/log_deprecated.o 00:02:22.159 LIB libspdk_ut_mock.a 00:02:22.159 LIB libspdk_ut.a 00:02:22.159 SO libspdk_ut_mock.so.6.0 00:02:22.159 SO libspdk_ut.so.2.0 00:02:22.159 LIB libspdk_log.a 00:02:22.159 SO libspdk_log.so.7.0 00:02:22.159 SYMLINK libspdk_ut.so 00:02:22.159 SYMLINK libspdk_ut_mock.so 00:02:22.159 SYMLINK libspdk_log.so 00:02:22.417 CC lib/dma/dma.o 
00:02:22.417 CC lib/util/base64.o 00:02:22.417 CC lib/util/bit_array.o 00:02:22.703 CC lib/util/cpuset.o 00:02:22.703 CC lib/util/crc16.o 00:02:22.703 CC lib/util/crc32.o 00:02:22.703 CC lib/util/crc32c.o 00:02:22.703 CXX lib/trace_parser/trace.o 00:02:22.703 CC lib/ioat/ioat.o 00:02:22.703 CC lib/vfio_user/host/vfio_user_pci.o 00:02:22.703 CC lib/util/crc32_ieee.o 00:02:22.703 CC lib/util/crc64.o 00:02:22.703 CC lib/util/dif.o 00:02:22.703 CC lib/vfio_user/host/vfio_user.o 00:02:22.983 CC lib/util/fd.o 00:02:22.983 CC lib/util/fd_group.o 00:02:22.983 CC lib/util/file.o 00:02:22.983 CC lib/util/hexlify.o 00:02:22.983 LIB libspdk_ioat.a 00:02:22.983 SO libspdk_ioat.so.7.0 00:02:22.983 LIB libspdk_dma.a 00:02:22.983 CC lib/util/iov.o 00:02:22.983 CC lib/util/math.o 00:02:22.983 LIB libspdk_vfio_user.a 00:02:22.983 SO libspdk_dma.so.4.0 00:02:22.983 SO libspdk_vfio_user.so.5.0 00:02:22.983 SYMLINK libspdk_ioat.so 00:02:22.983 SYMLINK libspdk_dma.so 00:02:23.241 CC lib/util/net.o 00:02:23.241 CC lib/util/pipe.o 00:02:23.241 CC lib/util/strerror_tls.o 00:02:23.241 CC lib/util/string.o 00:02:23.241 SYMLINK libspdk_vfio_user.so 00:02:23.241 CC lib/util/uuid.o 00:02:23.241 CC lib/util/xor.o 00:02:23.241 CC lib/util/zipf.o 00:02:23.498 LIB libspdk_util.a 00:02:23.498 SO libspdk_util.so.10.0 00:02:23.756 LIB libspdk_trace_parser.a 00:02:23.756 SYMLINK libspdk_util.so 00:02:23.756 SO libspdk_trace_parser.so.5.0 00:02:24.013 SYMLINK libspdk_trace_parser.so 00:02:24.013 CC lib/rdma_utils/rdma_utils.o 00:02:24.013 CC lib/env_dpdk/env.o 00:02:24.013 CC lib/rdma_provider/common.o 00:02:24.013 CC lib/vmd/vmd.o 00:02:24.013 CC lib/env_dpdk/memory.o 00:02:24.013 CC lib/vmd/led.o 00:02:24.013 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:24.013 CC lib/conf/conf.o 00:02:24.013 CC lib/json/json_parse.o 00:02:24.013 CC lib/idxd/idxd.o 00:02:24.270 CC lib/idxd/idxd_user.o 00:02:24.270 CC lib/json/json_util.o 00:02:24.270 LIB libspdk_rdma_provider.a 00:02:24.270 LIB libspdk_conf.a 00:02:24.270 SO libspdk_rdma_provider.so.6.0 00:02:24.270 SO libspdk_conf.so.6.0 00:02:24.270 LIB libspdk_rdma_utils.a 00:02:24.270 SO libspdk_rdma_utils.so.1.0 00:02:24.527 CC lib/json/json_write.o 00:02:24.527 SYMLINK libspdk_conf.so 00:02:24.527 CC lib/idxd/idxd_kernel.o 00:02:24.527 SYMLINK libspdk_rdma_provider.so 00:02:24.527 CC lib/env_dpdk/pci.o 00:02:24.527 SYMLINK libspdk_rdma_utils.so 00:02:24.527 CC lib/env_dpdk/init.o 00:02:24.527 CC lib/env_dpdk/threads.o 00:02:24.527 CC lib/env_dpdk/pci_ioat.o 00:02:24.527 CC lib/env_dpdk/pci_virtio.o 00:02:24.527 LIB libspdk_idxd.a 00:02:24.784 LIB libspdk_vmd.a 00:02:24.784 SO libspdk_idxd.so.12.0 00:02:24.784 LIB libspdk_json.a 00:02:24.784 SO libspdk_vmd.so.6.0 00:02:24.784 CC lib/env_dpdk/pci_vmd.o 00:02:24.784 CC lib/env_dpdk/pci_idxd.o 00:02:24.784 CC lib/env_dpdk/pci_event.o 00:02:24.784 SO libspdk_json.so.6.0 00:02:24.784 SYMLINK libspdk_idxd.so 00:02:24.784 CC lib/env_dpdk/sigbus_handler.o 00:02:24.784 SYMLINK libspdk_vmd.so 00:02:24.784 CC lib/env_dpdk/pci_dpdk.o 00:02:24.784 SYMLINK libspdk_json.so 00:02:24.784 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:24.784 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:25.042 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:25.042 CC lib/jsonrpc/jsonrpc_server.o 00:02:25.042 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:25.042 CC lib/jsonrpc/jsonrpc_client.o 00:02:25.299 LIB libspdk_jsonrpc.a 00:02:25.557 SO libspdk_jsonrpc.so.6.0 00:02:25.557 SYMLINK libspdk_jsonrpc.so 00:02:25.815 LIB libspdk_env_dpdk.a 00:02:25.815 CC lib/rpc/rpc.o 00:02:25.815 SO 
libspdk_env_dpdk.so.15.0 00:02:26.073 SYMLINK libspdk_env_dpdk.so 00:02:26.073 LIB libspdk_rpc.a 00:02:26.073 SO libspdk_rpc.so.6.0 00:02:26.073 SYMLINK libspdk_rpc.so 00:02:26.331 CC lib/notify/notify.o 00:02:26.331 CC lib/notify/notify_rpc.o 00:02:26.331 CC lib/keyring/keyring.o 00:02:26.331 CC lib/keyring/keyring_rpc.o 00:02:26.331 CC lib/trace/trace.o 00:02:26.331 CC lib/trace/trace_flags.o 00:02:26.331 CC lib/trace/trace_rpc.o 00:02:26.588 LIB libspdk_notify.a 00:02:26.588 LIB libspdk_keyring.a 00:02:26.588 SO libspdk_notify.so.6.0 00:02:26.846 SO libspdk_keyring.so.1.0 00:02:26.846 LIB libspdk_trace.a 00:02:26.846 SYMLINK libspdk_notify.so 00:02:26.846 SO libspdk_trace.so.10.0 00:02:26.846 SYMLINK libspdk_keyring.so 00:02:26.846 SYMLINK libspdk_trace.so 00:02:27.104 CC lib/thread/thread.o 00:02:27.104 CC lib/thread/iobuf.o 00:02:27.104 CC lib/sock/sock.o 00:02:27.104 CC lib/sock/sock_rpc.o 00:02:27.668 LIB libspdk_sock.a 00:02:27.668 SO libspdk_sock.so.10.0 00:02:27.668 SYMLINK libspdk_sock.so 00:02:27.925 CC lib/nvme/nvme_ctrlr.o 00:02:27.925 CC lib/nvme/nvme_fabric.o 00:02:27.925 CC lib/nvme/nvme_ns_cmd.o 00:02:27.925 CC lib/nvme/nvme_ns.o 00:02:27.925 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:27.925 CC lib/nvme/nvme_pcie.o 00:02:27.925 CC lib/nvme/nvme_qpair.o 00:02:27.925 CC lib/nvme/nvme.o 00:02:27.925 CC lib/nvme/nvme_pcie_common.o 00:02:28.858 CC lib/nvme/nvme_quirks.o 00:02:28.858 LIB libspdk_thread.a 00:02:28.858 SO libspdk_thread.so.10.1 00:02:28.858 CC lib/nvme/nvme_transport.o 00:02:28.858 CC lib/nvme/nvme_discovery.o 00:02:28.858 SYMLINK libspdk_thread.so 00:02:28.858 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:29.116 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:29.116 CC lib/nvme/nvme_tcp.o 00:02:29.375 CC lib/nvme/nvme_opal.o 00:02:29.375 CC lib/nvme/nvme_io_msg.o 00:02:29.634 CC lib/accel/accel.o 00:02:29.634 CC lib/accel/accel_rpc.o 00:02:29.634 CC lib/accel/accel_sw.o 00:02:29.634 CC lib/nvme/nvme_poll_group.o 00:02:29.634 CC lib/nvme/nvme_zns.o 00:02:29.634 CC lib/nvme/nvme_stubs.o 00:02:29.892 CC lib/nvme/nvme_auth.o 00:02:29.892 CC lib/nvme/nvme_cuse.o 00:02:30.150 CC lib/nvme/nvme_rdma.o 00:02:30.408 CC lib/blob/blobstore.o 00:02:30.667 CC lib/virtio/virtio.o 00:02:30.667 CC lib/init/json_config.o 00:02:30.667 CC lib/init/subsystem.o 00:02:30.667 CC lib/init/subsystem_rpc.o 00:02:30.667 CC lib/init/rpc.o 00:02:30.926 CC lib/virtio/virtio_vhost_user.o 00:02:30.926 CC lib/blob/request.o 00:02:30.926 CC lib/virtio/virtio_vfio_user.o 00:02:30.926 CC lib/virtio/virtio_pci.o 00:02:30.926 LIB libspdk_init.a 00:02:30.926 LIB libspdk_accel.a 00:02:30.926 SO libspdk_init.so.5.0 00:02:30.926 SO libspdk_accel.so.16.0 00:02:31.184 SYMLINK libspdk_init.so 00:02:31.184 CC lib/blob/zeroes.o 00:02:31.184 SYMLINK libspdk_accel.so 00:02:31.184 CC lib/blob/blob_bs_dev.o 00:02:31.184 LIB libspdk_virtio.a 00:02:31.184 SO libspdk_virtio.so.7.0 00:02:31.442 CC lib/event/reactor.o 00:02:31.442 CC lib/event/log_rpc.o 00:02:31.442 CC lib/event/app_rpc.o 00:02:31.442 CC lib/event/app.o 00:02:31.442 CC lib/bdev/bdev.o 00:02:31.442 SYMLINK libspdk_virtio.so 00:02:31.442 CC lib/event/scheduler_static.o 00:02:31.442 CC lib/bdev/bdev_rpc.o 00:02:31.442 CC lib/bdev/bdev_zone.o 00:02:31.701 LIB libspdk_nvme.a 00:02:31.701 CC lib/bdev/part.o 00:02:31.701 CC lib/bdev/scsi_nvme.o 00:02:31.959 SO libspdk_nvme.so.13.1 00:02:31.959 LIB libspdk_event.a 00:02:31.959 SO libspdk_event.so.14.0 00:02:32.217 SYMLINK libspdk_event.so 00:02:32.217 SYMLINK libspdk_nvme.so 00:02:33.592 LIB libspdk_blob.a 00:02:33.592 SO 
libspdk_blob.so.11.0 00:02:33.850 SYMLINK libspdk_blob.so 00:02:33.850 LIB libspdk_bdev.a 00:02:34.108 SO libspdk_bdev.so.16.0 00:02:34.108 CC lib/blobfs/tree.o 00:02:34.108 CC lib/blobfs/blobfs.o 00:02:34.108 CC lib/lvol/lvol.o 00:02:34.108 SYMLINK libspdk_bdev.so 00:02:34.366 CC lib/nbd/nbd.o 00:02:34.366 CC lib/nbd/nbd_rpc.o 00:02:34.366 CC lib/ftl/ftl_core.o 00:02:34.366 CC lib/ftl/ftl_init.o 00:02:34.367 CC lib/ftl/ftl_layout.o 00:02:34.367 CC lib/nvmf/ctrlr.o 00:02:34.367 CC lib/scsi/dev.o 00:02:34.367 CC lib/ublk/ublk.o 00:02:34.625 CC lib/ublk/ublk_rpc.o 00:02:34.625 CC lib/nvmf/ctrlr_discovery.o 00:02:34.625 CC lib/scsi/lun.o 00:02:34.625 CC lib/ftl/ftl_debug.o 00:02:34.883 CC lib/ftl/ftl_io.o 00:02:34.883 LIB libspdk_nbd.a 00:02:34.883 CC lib/ftl/ftl_sb.o 00:02:34.883 SO libspdk_nbd.so.7.0 00:02:34.883 CC lib/scsi/port.o 00:02:34.883 CC lib/scsi/scsi.o 00:02:34.883 LIB libspdk_blobfs.a 00:02:34.883 SYMLINK libspdk_nbd.so 00:02:35.180 CC lib/scsi/scsi_bdev.o 00:02:35.180 LIB libspdk_ublk.a 00:02:35.180 SO libspdk_blobfs.so.10.0 00:02:35.180 CC lib/ftl/ftl_l2p.o 00:02:35.180 LIB libspdk_lvol.a 00:02:35.180 SO libspdk_lvol.so.10.0 00:02:35.180 SO libspdk_ublk.so.3.0 00:02:35.180 SYMLINK libspdk_blobfs.so 00:02:35.180 CC lib/ftl/ftl_l2p_flat.o 00:02:35.180 CC lib/nvmf/ctrlr_bdev.o 00:02:35.180 CC lib/ftl/ftl_nv_cache.o 00:02:35.180 CC lib/ftl/ftl_band.o 00:02:35.180 CC lib/ftl/ftl_band_ops.o 00:02:35.180 SYMLINK libspdk_lvol.so 00:02:35.180 CC lib/ftl/ftl_writer.o 00:02:35.180 SYMLINK libspdk_ublk.so 00:02:35.180 CC lib/ftl/ftl_rq.o 00:02:35.456 CC lib/scsi/scsi_pr.o 00:02:35.456 CC lib/ftl/ftl_reloc.o 00:02:35.456 CC lib/ftl/ftl_l2p_cache.o 00:02:35.456 CC lib/ftl/ftl_p2l.o 00:02:35.456 CC lib/ftl/mngt/ftl_mngt.o 00:02:35.456 CC lib/scsi/scsi_rpc.o 00:02:35.456 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:35.714 CC lib/nvmf/subsystem.o 00:02:35.714 CC lib/scsi/task.o 00:02:35.714 CC lib/nvmf/nvmf.o 00:02:35.714 CC lib/nvmf/nvmf_rpc.o 00:02:35.714 CC lib/nvmf/transport.o 00:02:35.714 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:35.714 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:35.973 CC lib/nvmf/tcp.o 00:02:35.973 LIB libspdk_scsi.a 00:02:35.973 CC lib/nvmf/stubs.o 00:02:35.973 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:35.973 SO libspdk_scsi.so.9.0 00:02:35.973 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:35.973 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.230 SYMLINK libspdk_scsi.so 00:02:36.230 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:36.488 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:36.488 CC lib/nvmf/mdns_server.o 00:02:36.488 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:36.488 CC lib/nvmf/rdma.o 00:02:36.488 CC lib/vhost/vhost.o 00:02:36.488 CC lib/iscsi/conn.o 00:02:36.488 CC lib/vhost/vhost_rpc.o 00:02:36.746 CC lib/vhost/vhost_scsi.o 00:02:36.746 CC lib/vhost/vhost_blk.o 00:02:36.746 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:37.004 CC lib/vhost/rte_vhost_user.o 00:02:37.004 CC lib/iscsi/init_grp.o 00:02:37.004 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:37.262 CC lib/iscsi/iscsi.o 00:02:37.262 CC lib/iscsi/md5.o 00:02:37.262 CC lib/iscsi/param.o 00:02:37.262 CC lib/iscsi/portal_grp.o 00:02:37.520 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:37.520 CC lib/nvmf/auth.o 00:02:37.520 CC lib/iscsi/tgt_node.o 00:02:37.520 CC lib/iscsi/iscsi_subsystem.o 00:02:37.520 CC lib/ftl/utils/ftl_conf.o 00:02:37.778 CC lib/ftl/utils/ftl_md.o 00:02:37.778 CC lib/ftl/utils/ftl_mempool.o 00:02:37.778 CC lib/iscsi/iscsi_rpc.o 00:02:37.778 CC lib/iscsi/task.o 00:02:38.037 LIB libspdk_vhost.a 00:02:38.037 CC lib/ftl/utils/ftl_bitmap.o 00:02:38.037 
CC lib/ftl/utils/ftl_property.o 00:02:38.037 SO libspdk_vhost.so.8.0 00:02:38.037 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:38.037 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:38.037 SYMLINK libspdk_vhost.so 00:02:38.037 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:38.037 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:38.037 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:38.296 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:38.296 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:38.296 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:38.296 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:38.296 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:38.296 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:38.571 CC lib/ftl/base/ftl_base_dev.o 00:02:38.571 CC lib/ftl/base/ftl_base_bdev.o 00:02:38.571 CC lib/ftl/ftl_trace.o 00:02:38.571 LIB libspdk_iscsi.a 00:02:38.571 LIB libspdk_nvmf.a 00:02:38.571 SO libspdk_iscsi.so.8.0 00:02:38.830 SO libspdk_nvmf.so.19.0 00:02:38.830 LIB libspdk_ftl.a 00:02:38.830 SYMLINK libspdk_iscsi.so 00:02:38.830 SYMLINK libspdk_nvmf.so 00:02:39.089 SO libspdk_ftl.so.9.0 00:02:39.348 SYMLINK libspdk_ftl.so 00:02:39.915 CC module/env_dpdk/env_dpdk_rpc.o 00:02:39.915 CC module/accel/dsa/accel_dsa.o 00:02:39.915 CC module/keyring/linux/keyring.o 00:02:39.915 CC module/keyring/file/keyring.o 00:02:39.915 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:39.915 CC module/accel/iaa/accel_iaa.o 00:02:39.915 CC module/accel/ioat/accel_ioat.o 00:02:39.915 LIB libspdk_env_dpdk_rpc.a 00:02:39.915 CC module/accel/error/accel_error.o 00:02:39.915 CC module/sock/posix/posix.o 00:02:39.915 CC module/blob/bdev/blob_bdev.o 00:02:39.915 SO libspdk_env_dpdk_rpc.so.6.0 00:02:40.174 SYMLINK libspdk_env_dpdk_rpc.so 00:02:40.174 CC module/accel/iaa/accel_iaa_rpc.o 00:02:40.174 CC module/keyring/file/keyring_rpc.o 00:02:40.174 CC module/keyring/linux/keyring_rpc.o 00:02:40.174 CC module/accel/error/accel_error_rpc.o 00:02:40.175 LIB libspdk_accel_iaa.a 00:02:40.175 CC module/accel/dsa/accel_dsa_rpc.o 00:02:40.175 SO libspdk_accel_iaa.so.3.0 00:02:40.175 LIB libspdk_keyring_file.a 00:02:40.175 CC module/accel/ioat/accel_ioat_rpc.o 00:02:40.433 SO libspdk_keyring_file.so.1.0 00:02:40.433 LIB libspdk_scheduler_dynamic.a 00:02:40.433 LIB libspdk_blob_bdev.a 00:02:40.433 LIB libspdk_keyring_linux.a 00:02:40.433 SO libspdk_scheduler_dynamic.so.4.0 00:02:40.433 SO libspdk_blob_bdev.so.11.0 00:02:40.433 SYMLINK libspdk_accel_iaa.so 00:02:40.433 SO libspdk_keyring_linux.so.1.0 00:02:40.433 LIB libspdk_accel_dsa.a 00:02:40.433 LIB libspdk_accel_error.a 00:02:40.433 SYMLINK libspdk_keyring_file.so 00:02:40.433 SO libspdk_accel_dsa.so.5.0 00:02:40.433 SYMLINK libspdk_scheduler_dynamic.so 00:02:40.433 SYMLINK libspdk_blob_bdev.so 00:02:40.433 SYMLINK libspdk_keyring_linux.so 00:02:40.433 SO libspdk_accel_error.so.2.0 00:02:40.433 LIB libspdk_accel_ioat.a 00:02:40.433 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:40.691 SYMLINK libspdk_accel_dsa.so 00:02:40.691 SO libspdk_accel_ioat.so.6.0 00:02:40.691 SYMLINK libspdk_accel_error.so 00:02:40.691 SYMLINK libspdk_accel_ioat.so 00:02:40.691 CC module/scheduler/gscheduler/gscheduler.o 00:02:40.950 LIB libspdk_scheduler_dpdk_governor.a 00:02:40.950 CC module/bdev/delay/vbdev_delay.o 00:02:40.950 CC module/bdev/lvol/vbdev_lvol.o 00:02:40.950 CC module/bdev/malloc/bdev_malloc.o 00:02:40.950 CC module/bdev/error/vbdev_error.o 00:02:40.950 CC module/bdev/gpt/gpt.o 00:02:40.950 LIB libspdk_scheduler_gscheduler.a 00:02:40.950 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:40.950 CC 
module/blobfs/bdev/blobfs_bdev.o 00:02:40.950 SO libspdk_scheduler_gscheduler.so.4.0 00:02:40.950 LIB libspdk_sock_posix.a 00:02:40.950 CC module/bdev/null/bdev_null.o 00:02:40.950 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:40.950 SO libspdk_sock_posix.so.6.0 00:02:40.950 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:40.950 SYMLINK libspdk_scheduler_gscheduler.so 00:02:40.950 CC module/bdev/error/vbdev_error_rpc.o 00:02:40.950 SYMLINK libspdk_sock_posix.so 00:02:40.950 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:41.208 CC module/bdev/gpt/vbdev_gpt.o 00:02:41.208 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:41.208 CC module/bdev/null/bdev_null_rpc.o 00:02:41.208 LIB libspdk_blobfs_bdev.a 00:02:41.208 SO libspdk_blobfs_bdev.so.6.0 00:02:41.467 LIB libspdk_bdev_error.a 00:02:41.467 CC module/bdev/nvme/bdev_nvme.o 00:02:41.467 CC module/bdev/passthru/vbdev_passthru.o 00:02:41.467 LIB libspdk_bdev_gpt.a 00:02:41.467 LIB libspdk_bdev_malloc.a 00:02:41.467 SO libspdk_bdev_error.so.6.0 00:02:41.467 SO libspdk_bdev_gpt.so.6.0 00:02:41.467 SYMLINK libspdk_blobfs_bdev.so 00:02:41.467 SO libspdk_bdev_malloc.so.6.0 00:02:41.467 LIB libspdk_bdev_delay.a 00:02:41.467 CC module/bdev/raid/bdev_raid.o 00:02:41.467 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:41.467 SO libspdk_bdev_delay.so.6.0 00:02:41.467 LIB libspdk_bdev_null.a 00:02:41.467 SYMLINK libspdk_bdev_error.so 00:02:41.467 SYMLINK libspdk_bdev_malloc.so 00:02:41.467 SYMLINK libspdk_bdev_gpt.so 00:02:41.467 SO libspdk_bdev_null.so.6.0 00:02:41.467 SYMLINK libspdk_bdev_delay.so 00:02:41.467 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:41.467 CC module/bdev/raid/bdev_raid_rpc.o 00:02:41.725 SYMLINK libspdk_bdev_null.so 00:02:41.725 LIB libspdk_bdev_passthru.a 00:02:41.725 CC module/bdev/split/vbdev_split.o 00:02:41.725 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:41.725 CC module/bdev/aio/bdev_aio.o 00:02:41.725 SO libspdk_bdev_passthru.so.6.0 00:02:41.725 CC module/bdev/ftl/bdev_ftl.o 00:02:41.725 CC module/bdev/aio/bdev_aio_rpc.o 00:02:41.984 SYMLINK libspdk_bdev_passthru.so 00:02:41.984 LIB libspdk_bdev_lvol.a 00:02:41.984 CC module/bdev/iscsi/bdev_iscsi.o 00:02:41.984 CC module/bdev/split/vbdev_split_rpc.o 00:02:41.984 SO libspdk_bdev_lvol.so.6.0 00:02:41.984 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:41.984 SYMLINK libspdk_bdev_lvol.so 00:02:41.984 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:42.243 LIB libspdk_bdev_aio.a 00:02:42.243 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:42.243 CC module/bdev/nvme/nvme_rpc.o 00:02:42.243 SO libspdk_bdev_aio.so.6.0 00:02:42.243 LIB libspdk_bdev_split.a 00:02:42.243 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:42.243 SO libspdk_bdev_split.so.6.0 00:02:42.243 LIB libspdk_bdev_zone_block.a 00:02:42.243 SYMLINK libspdk_bdev_aio.so 00:02:42.243 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:42.243 SO libspdk_bdev_zone_block.so.6.0 00:02:42.243 SYMLINK libspdk_bdev_split.so 00:02:42.243 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:42.243 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:42.501 SYMLINK libspdk_bdev_zone_block.so 00:02:42.501 CC module/bdev/nvme/bdev_mdns_client.o 00:02:42.501 LIB libspdk_bdev_ftl.a 00:02:42.501 CC module/bdev/nvme/vbdev_opal.o 00:02:42.501 SO libspdk_bdev_ftl.so.6.0 00:02:42.501 LIB libspdk_bdev_iscsi.a 00:02:42.501 CC module/bdev/raid/bdev_raid_sb.o 00:02:42.501 SO libspdk_bdev_iscsi.so.6.0 00:02:42.501 SYMLINK libspdk_bdev_ftl.so 00:02:42.501 CC module/bdev/raid/raid0.o 00:02:42.501 CC module/bdev/raid/raid1.o 00:02:42.501 SYMLINK libspdk_bdev_iscsi.so 
00:02:42.760 CC module/bdev/raid/concat.o 00:02:42.760 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:42.760 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:42.760 LIB libspdk_bdev_virtio.a 00:02:42.760 LIB libspdk_bdev_raid.a 00:02:43.018 SO libspdk_bdev_virtio.so.6.0 00:02:43.018 SO libspdk_bdev_raid.so.6.0 00:02:43.018 SYMLINK libspdk_bdev_virtio.so 00:02:43.018 SYMLINK libspdk_bdev_raid.so 00:02:43.585 LIB libspdk_bdev_nvme.a 00:02:43.844 SO libspdk_bdev_nvme.so.7.0 00:02:43.844 SYMLINK libspdk_bdev_nvme.so 00:02:44.410 CC module/event/subsystems/keyring/keyring.o 00:02:44.410 CC module/event/subsystems/scheduler/scheduler.o 00:02:44.410 CC module/event/subsystems/sock/sock.o 00:02:44.410 CC module/event/subsystems/iobuf/iobuf.o 00:02:44.410 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:44.410 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:44.410 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:44.410 CC module/event/subsystems/vmd/vmd.o 00:02:44.410 LIB libspdk_event_keyring.a 00:02:44.410 LIB libspdk_event_scheduler.a 00:02:44.668 LIB libspdk_event_vhost_blk.a 00:02:44.668 LIB libspdk_event_sock.a 00:02:44.668 SO libspdk_event_keyring.so.1.0 00:02:44.668 SO libspdk_event_scheduler.so.4.0 00:02:44.668 SO libspdk_event_vhost_blk.so.3.0 00:02:44.668 SO libspdk_event_sock.so.5.0 00:02:44.668 LIB libspdk_event_vmd.a 00:02:44.668 SYMLINK libspdk_event_keyring.so 00:02:44.668 LIB libspdk_event_iobuf.a 00:02:44.668 SYMLINK libspdk_event_scheduler.so 00:02:44.668 SO libspdk_event_vmd.so.6.0 00:02:44.668 SYMLINK libspdk_event_vhost_blk.so 00:02:44.668 SYMLINK libspdk_event_sock.so 00:02:44.668 SO libspdk_event_iobuf.so.3.0 00:02:44.668 SYMLINK libspdk_event_vmd.so 00:02:44.668 SYMLINK libspdk_event_iobuf.so 00:02:45.234 CC module/event/subsystems/accel/accel.o 00:02:45.234 LIB libspdk_event_accel.a 00:02:45.234 SO libspdk_event_accel.so.6.0 00:02:45.234 SYMLINK libspdk_event_accel.so 00:02:45.799 CC module/event/subsystems/bdev/bdev.o 00:02:45.799 LIB libspdk_event_bdev.a 00:02:45.799 SO libspdk_event_bdev.so.6.0 00:02:46.057 SYMLINK libspdk_event_bdev.so 00:02:46.315 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:46.315 CC module/event/subsystems/nbd/nbd.o 00:02:46.315 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:46.315 CC module/event/subsystems/scsi/scsi.o 00:02:46.315 CC module/event/subsystems/ublk/ublk.o 00:02:46.574 LIB libspdk_event_ublk.a 00:02:46.574 LIB libspdk_event_nbd.a 00:02:46.574 LIB libspdk_event_scsi.a 00:02:46.574 SO libspdk_event_ublk.so.3.0 00:02:46.574 SO libspdk_event_nbd.so.6.0 00:02:46.574 SO libspdk_event_scsi.so.6.0 00:02:46.574 LIB libspdk_event_nvmf.a 00:02:46.574 SO libspdk_event_nvmf.so.6.0 00:02:46.574 SYMLINK libspdk_event_scsi.so 00:02:46.574 SYMLINK libspdk_event_ublk.so 00:02:46.574 SYMLINK libspdk_event_nbd.so 00:02:46.574 SYMLINK libspdk_event_nvmf.so 00:02:46.832 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:46.832 CC module/event/subsystems/iscsi/iscsi.o 00:02:47.090 LIB libspdk_event_vhost_scsi.a 00:02:47.090 SO libspdk_event_vhost_scsi.so.3.0 00:02:47.090 LIB libspdk_event_iscsi.a 00:02:47.090 SO libspdk_event_iscsi.so.6.0 00:02:47.090 SYMLINK libspdk_event_vhost_scsi.so 00:02:47.348 SYMLINK libspdk_event_iscsi.so 00:02:47.348 SO libspdk.so.6.0 00:02:47.348 SYMLINK libspdk.so 00:02:47.608 CC app/trace_record/trace_record.o 00:02:47.608 CXX app/trace/trace.o 00:02:47.608 CC app/spdk_lspci/spdk_lspci.o 00:02:47.608 CC app/nvmf_tgt/nvmf_main.o 00:02:47.926 CC app/iscsi_tgt/iscsi_tgt.o 00:02:47.926 CC app/spdk_tgt/spdk_tgt.o 
00:02:47.926 CC examples/ioat/perf/perf.o 00:02:47.926 CC examples/util/zipf/zipf.o 00:02:47.926 CC test/thread/poller_perf/poller_perf.o 00:02:47.926 LINK spdk_lspci 00:02:47.926 LINK nvmf_tgt 00:02:47.926 LINK spdk_trace_record 00:02:47.926 LINK zipf 00:02:47.926 LINK spdk_tgt 00:02:48.184 LINK iscsi_tgt 00:02:48.184 LINK poller_perf 00:02:48.184 LINK ioat_perf 00:02:48.184 LINK spdk_trace 00:02:48.443 CC app/spdk_nvme_perf/perf.o 00:02:48.443 CC app/spdk_nvme_identify/identify.o 00:02:48.443 CC examples/ioat/verify/verify.o 00:02:48.443 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:48.443 CC app/spdk_nvme_discover/discovery_aer.o 00:02:48.443 CC test/app/bdev_svc/bdev_svc.o 00:02:48.443 CC test/dma/test_dma/test_dma.o 00:02:48.443 CC examples/thread/thread/thread_ex.o 00:02:48.702 CC examples/sock/hello_world/hello_sock.o 00:02:48.702 LINK interrupt_tgt 00:02:48.702 LINK spdk_nvme_discover 00:02:48.702 LINK verify 00:02:48.702 LINK bdev_svc 00:02:48.702 LINK thread 00:02:48.960 LINK hello_sock 00:02:48.960 LINK test_dma 00:02:48.960 TEST_HEADER include/spdk/accel.h 00:02:48.960 TEST_HEADER include/spdk/accel_module.h 00:02:48.960 TEST_HEADER include/spdk/assert.h 00:02:48.960 TEST_HEADER include/spdk/barrier.h 00:02:48.960 TEST_HEADER include/spdk/base64.h 00:02:48.960 TEST_HEADER include/spdk/bdev.h 00:02:48.960 TEST_HEADER include/spdk/bdev_module.h 00:02:48.960 TEST_HEADER include/spdk/bdev_zone.h 00:02:48.960 TEST_HEADER include/spdk/bit_array.h 00:02:48.960 TEST_HEADER include/spdk/bit_pool.h 00:02:48.960 TEST_HEADER include/spdk/blob_bdev.h 00:02:48.960 CC app/spdk_top/spdk_top.o 00:02:48.960 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:48.960 TEST_HEADER include/spdk/blobfs.h 00:02:48.960 TEST_HEADER include/spdk/blob.h 00:02:48.960 TEST_HEADER include/spdk/conf.h 00:02:48.960 TEST_HEADER include/spdk/config.h 00:02:48.960 TEST_HEADER include/spdk/cpuset.h 00:02:48.960 TEST_HEADER include/spdk/crc16.h 00:02:48.960 TEST_HEADER include/spdk/crc32.h 00:02:48.960 TEST_HEADER include/spdk/crc64.h 00:02:49.218 TEST_HEADER include/spdk/dif.h 00:02:49.218 TEST_HEADER include/spdk/dma.h 00:02:49.218 TEST_HEADER include/spdk/endian.h 00:02:49.218 TEST_HEADER include/spdk/env_dpdk.h 00:02:49.218 TEST_HEADER include/spdk/env.h 00:02:49.218 TEST_HEADER include/spdk/event.h 00:02:49.218 TEST_HEADER include/spdk/fd_group.h 00:02:49.218 TEST_HEADER include/spdk/fd.h 00:02:49.218 TEST_HEADER include/spdk/file.h 00:02:49.218 TEST_HEADER include/spdk/ftl.h 00:02:49.218 TEST_HEADER include/spdk/gpt_spec.h 00:02:49.218 TEST_HEADER include/spdk/hexlify.h 00:02:49.218 TEST_HEADER include/spdk/histogram_data.h 00:02:49.218 TEST_HEADER include/spdk/idxd.h 00:02:49.218 TEST_HEADER include/spdk/idxd_spec.h 00:02:49.218 TEST_HEADER include/spdk/init.h 00:02:49.218 TEST_HEADER include/spdk/ioat.h 00:02:49.218 TEST_HEADER include/spdk/ioat_spec.h 00:02:49.218 TEST_HEADER include/spdk/iscsi_spec.h 00:02:49.218 TEST_HEADER include/spdk/json.h 00:02:49.218 TEST_HEADER include/spdk/jsonrpc.h 00:02:49.218 TEST_HEADER include/spdk/keyring.h 00:02:49.218 TEST_HEADER include/spdk/keyring_module.h 00:02:49.218 TEST_HEADER include/spdk/likely.h 00:02:49.218 TEST_HEADER include/spdk/log.h 00:02:49.218 TEST_HEADER include/spdk/lvol.h 00:02:49.218 TEST_HEADER include/spdk/memory.h 00:02:49.218 TEST_HEADER include/spdk/mmio.h 00:02:49.218 TEST_HEADER include/spdk/nbd.h 00:02:49.218 TEST_HEADER include/spdk/net.h 00:02:49.218 TEST_HEADER include/spdk/notify.h 00:02:49.218 TEST_HEADER include/spdk/nvme.h 00:02:49.218 
TEST_HEADER include/spdk/nvme_intel.h 00:02:49.218 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:49.218 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:49.218 TEST_HEADER include/spdk/nvme_spec.h 00:02:49.218 TEST_HEADER include/spdk/nvme_zns.h 00:02:49.218 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:49.218 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:49.218 TEST_HEADER include/spdk/nvmf.h 00:02:49.218 TEST_HEADER include/spdk/nvmf_spec.h 00:02:49.218 TEST_HEADER include/spdk/nvmf_transport.h 00:02:49.218 TEST_HEADER include/spdk/opal.h 00:02:49.218 TEST_HEADER include/spdk/opal_spec.h 00:02:49.218 TEST_HEADER include/spdk/pci_ids.h 00:02:49.218 TEST_HEADER include/spdk/pipe.h 00:02:49.218 TEST_HEADER include/spdk/queue.h 00:02:49.218 CC test/env/vtophys/vtophys.o 00:02:49.218 TEST_HEADER include/spdk/reduce.h 00:02:49.218 LINK spdk_nvme_perf 00:02:49.218 TEST_HEADER include/spdk/rpc.h 00:02:49.218 TEST_HEADER include/spdk/scheduler.h 00:02:49.218 TEST_HEADER include/spdk/scsi.h 00:02:49.218 TEST_HEADER include/spdk/scsi_spec.h 00:02:49.218 TEST_HEADER include/spdk/sock.h 00:02:49.218 TEST_HEADER include/spdk/stdinc.h 00:02:49.218 TEST_HEADER include/spdk/string.h 00:02:49.218 TEST_HEADER include/spdk/thread.h 00:02:49.218 CC test/env/mem_callbacks/mem_callbacks.o 00:02:49.218 TEST_HEADER include/spdk/trace.h 00:02:49.218 TEST_HEADER include/spdk/trace_parser.h 00:02:49.218 TEST_HEADER include/spdk/tree.h 00:02:49.218 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:49.218 TEST_HEADER include/spdk/ublk.h 00:02:49.218 TEST_HEADER include/spdk/util.h 00:02:49.218 TEST_HEADER include/spdk/uuid.h 00:02:49.218 TEST_HEADER include/spdk/version.h 00:02:49.218 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:49.218 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:49.218 TEST_HEADER include/spdk/vhost.h 00:02:49.218 TEST_HEADER include/spdk/vmd.h 00:02:49.218 TEST_HEADER include/spdk/xor.h 00:02:49.218 TEST_HEADER include/spdk/zipf.h 00:02:49.218 CXX test/cpp_headers/accel.o 00:02:49.218 CC app/vhost/vhost.o 00:02:49.218 LINK spdk_nvme_identify 00:02:49.476 LINK vtophys 00:02:49.476 CC app/spdk_dd/spdk_dd.o 00:02:49.476 CXX test/cpp_headers/accel_module.o 00:02:49.476 LINK vhost 00:02:49.476 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:49.734 CC test/env/memory/memory_ut.o 00:02:49.734 CXX test/cpp_headers/assert.o 00:02:49.734 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:49.734 LINK nvme_fuzz 00:02:49.734 LINK env_dpdk_post_init 00:02:49.734 LINK spdk_dd 00:02:49.992 CXX test/cpp_headers/barrier.o 00:02:49.992 LINK mem_callbacks 00:02:49.992 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:49.992 LINK spdk_top 00:02:49.992 CXX test/cpp_headers/base64.o 00:02:49.992 CC test/env/pci/pci_ut.o 00:02:50.250 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:50.250 CXX test/cpp_headers/bdev.o 00:02:50.250 CC app/fio/nvme/fio_plugin.o 00:02:50.250 CC test/event/event_perf/event_perf.o 00:02:50.534 CC test/nvme/aer/aer.o 00:02:50.534 CC test/event/reactor/reactor.o 00:02:50.534 CXX test/cpp_headers/bdev_module.o 00:02:50.534 LINK pci_ut 00:02:50.534 LINK event_perf 00:02:50.802 LINK vhost_fuzz 00:02:50.802 LINK reactor 00:02:50.802 LINK aer 00:02:50.802 CXX test/cpp_headers/bdev_zone.o 00:02:50.802 LINK memory_ut 00:02:51.060 LINK spdk_nvme 00:02:51.060 CC test/nvme/reset/reset.o 00:02:51.060 CC test/nvme/sgl/sgl.o 00:02:51.060 CC test/event/reactor_perf/reactor_perf.o 00:02:51.060 CXX test/cpp_headers/bit_array.o 00:02:51.060 CC app/fio/bdev/fio_plugin.o 00:02:51.060 CXX test/cpp_headers/bit_pool.o 
00:02:51.060 CC test/event/app_repeat/app_repeat.o 00:02:51.319 CC test/event/scheduler/scheduler.o 00:02:51.319 LINK reactor_perf 00:02:51.319 CXX test/cpp_headers/blob_bdev.o 00:02:51.319 LINK reset 00:02:51.319 LINK app_repeat 00:02:51.319 LINK sgl 00:02:51.319 CXX test/cpp_headers/blobfs_bdev.o 00:02:51.577 LINK scheduler 00:02:51.577 CC examples/vmd/lsvmd/lsvmd.o 00:02:51.577 CXX test/cpp_headers/blobfs.o 00:02:51.577 LINK iscsi_fuzz 00:02:51.835 LINK spdk_bdev 00:02:51.835 LINK lsvmd 00:02:51.835 CC examples/idxd/perf/perf.o 00:02:51.835 CXX test/cpp_headers/blob.o 00:02:51.835 CC test/nvme/e2edp/nvme_dp.o 00:02:51.835 CC examples/accel/perf/accel_perf.o 00:02:52.093 CXX test/cpp_headers/conf.o 00:02:52.093 CC examples/blob/hello_world/hello_blob.o 00:02:52.093 CC test/nvme/overhead/overhead.o 00:02:52.352 CXX test/cpp_headers/config.o 00:02:52.352 LINK idxd_perf 00:02:52.352 CXX test/cpp_headers/cpuset.o 00:02:52.352 CC test/app/histogram_perf/histogram_perf.o 00:02:52.352 LINK hello_blob 00:02:52.352 CC examples/vmd/led/led.o 00:02:52.352 CC examples/blob/cli/blobcli.o 00:02:52.352 LINK nvme_dp 00:02:52.610 LINK overhead 00:02:52.610 CXX test/cpp_headers/crc16.o 00:02:52.610 LINK histogram_perf 00:02:52.610 LINK accel_perf 00:02:52.610 LINK led 00:02:52.610 CXX test/cpp_headers/crc32.o 00:02:52.869 CC test/nvme/err_injection/err_injection.o 00:02:52.869 CC examples/nvme/hello_world/hello_world.o 00:02:52.869 CXX test/cpp_headers/crc64.o 00:02:52.869 CC examples/nvme/reconnect/reconnect.o 00:02:52.869 CC test/app/jsoncat/jsoncat.o 00:02:52.869 CC test/rpc_client/rpc_client_test.o 00:02:52.869 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:52.869 CC examples/nvme/arbitration/arbitration.o 00:02:52.869 LINK blobcli 00:02:52.869 LINK err_injection 00:02:53.127 CXX test/cpp_headers/dif.o 00:02:53.127 LINK rpc_client_test 00:02:53.127 LINK jsoncat 00:02:53.127 LINK hello_world 00:02:53.385 LINK reconnect 00:02:53.385 CXX test/cpp_headers/dma.o 00:02:53.385 CXX test/cpp_headers/endian.o 00:02:53.385 CC test/nvme/startup/startup.o 00:02:53.385 LINK arbitration 00:02:53.385 CC test/app/stub/stub.o 00:02:53.385 CXX test/cpp_headers/env_dpdk.o 00:02:53.644 CC examples/nvme/hotplug/hotplug.o 00:02:53.644 LINK nvme_manage 00:02:53.644 LINK startup 00:02:53.644 CXX test/cpp_headers/env.o 00:02:53.644 LINK stub 00:02:53.644 CXX test/cpp_headers/event.o 00:02:53.644 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:53.644 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:53.644 CC examples/nvme/abort/abort.o 00:02:53.644 CXX test/cpp_headers/fd_group.o 00:02:53.903 CXX test/cpp_headers/fd.o 00:02:53.903 CXX test/cpp_headers/file.o 00:02:53.903 LINK hotplug 00:02:53.903 CC test/nvme/reserve/reserve.o 00:02:53.903 LINK pmr_persistence 00:02:53.903 LINK cmb_copy 00:02:53.903 CXX test/cpp_headers/ftl.o 00:02:53.903 CXX test/cpp_headers/gpt_spec.o 00:02:54.161 CXX test/cpp_headers/hexlify.o 00:02:54.161 LINK reserve 00:02:54.161 LINK abort 00:02:54.161 CXX test/cpp_headers/histogram_data.o 00:02:54.161 CC examples/bdev/hello_world/hello_bdev.o 00:02:54.161 CC examples/bdev/bdevperf/bdevperf.o 00:02:54.161 CXX test/cpp_headers/idxd.o 00:02:54.419 CXX test/cpp_headers/idxd_spec.o 00:02:54.419 CXX test/cpp_headers/init.o 00:02:54.419 CC test/accel/dif/dif.o 00:02:54.419 LINK hello_bdev 00:02:54.419 CC test/nvme/simple_copy/simple_copy.o 00:02:54.419 CC test/blobfs/mkfs/mkfs.o 00:02:54.677 CXX test/cpp_headers/ioat.o 00:02:54.677 CC test/nvme/connect_stress/connect_stress.o 00:02:54.677 CC 
test/nvme/boot_partition/boot_partition.o 00:02:54.677 CC test/lvol/esnap/esnap.o 00:02:54.677 CXX test/cpp_headers/ioat_spec.o 00:02:54.677 LINK simple_copy 00:02:54.677 LINK mkfs 00:02:54.935 LINK boot_partition 00:02:54.935 LINK connect_stress 00:02:54.935 CXX test/cpp_headers/iscsi_spec.o 00:02:54.935 LINK dif 00:02:54.935 CC test/nvme/compliance/nvme_compliance.o 00:02:54.935 CXX test/cpp_headers/json.o 00:02:54.935 CXX test/cpp_headers/jsonrpc.o 00:02:55.193 LINK bdevperf 00:02:55.193 CXX test/cpp_headers/keyring.o 00:02:55.193 CXX test/cpp_headers/keyring_module.o 00:02:55.193 CC test/nvme/fused_ordering/fused_ordering.o 00:02:55.193 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:55.193 CXX test/cpp_headers/likely.o 00:02:55.193 CC test/nvme/fdp/fdp.o 00:02:55.193 CC test/nvme/cuse/cuse.o 00:02:55.451 LINK nvme_compliance 00:02:55.451 CXX test/cpp_headers/log.o 00:02:55.451 LINK fused_ordering 00:02:55.451 LINK doorbell_aers 00:02:55.451 CXX test/cpp_headers/lvol.o 00:02:55.451 CXX test/cpp_headers/memory.o 00:02:55.709 CXX test/cpp_headers/mmio.o 00:02:55.709 CC test/bdev/bdevio/bdevio.o 00:02:55.709 CXX test/cpp_headers/nbd.o 00:02:55.709 LINK fdp 00:02:55.709 CXX test/cpp_headers/net.o 00:02:55.709 CC examples/nvmf/nvmf/nvmf.o 00:02:55.709 CXX test/cpp_headers/notify.o 00:02:55.709 CXX test/cpp_headers/nvme.o 00:02:55.709 CXX test/cpp_headers/nvme_intel.o 00:02:55.709 CXX test/cpp_headers/nvme_ocssd.o 00:02:55.967 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:55.967 CXX test/cpp_headers/nvme_spec.o 00:02:55.967 CXX test/cpp_headers/nvme_zns.o 00:02:55.967 CXX test/cpp_headers/nvmf_cmd.o 00:02:55.967 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:55.967 LINK bdevio 00:02:55.967 CXX test/cpp_headers/nvmf.o 00:02:55.967 CXX test/cpp_headers/nvmf_spec.o 00:02:55.967 CXX test/cpp_headers/nvmf_transport.o 00:02:56.254 CXX test/cpp_headers/opal.o 00:02:56.254 CXX test/cpp_headers/opal_spec.o 00:02:56.254 LINK nvmf 00:02:56.254 CXX test/cpp_headers/pci_ids.o 00:02:56.254 CXX test/cpp_headers/pipe.o 00:02:56.254 CXX test/cpp_headers/queue.o 00:02:56.254 CXX test/cpp_headers/reduce.o 00:02:56.254 CXX test/cpp_headers/rpc.o 00:02:56.254 CXX test/cpp_headers/scheduler.o 00:02:56.254 CXX test/cpp_headers/scsi.o 00:02:56.254 CXX test/cpp_headers/scsi_spec.o 00:02:56.254 CXX test/cpp_headers/sock.o 00:02:56.254 CXX test/cpp_headers/stdinc.o 00:02:56.516 CXX test/cpp_headers/string.o 00:02:56.516 CXX test/cpp_headers/thread.o 00:02:56.516 CXX test/cpp_headers/trace.o 00:02:56.516 CXX test/cpp_headers/trace_parser.o 00:02:56.516 CXX test/cpp_headers/tree.o 00:02:56.516 CXX test/cpp_headers/ublk.o 00:02:56.516 CXX test/cpp_headers/util.o 00:02:56.516 CXX test/cpp_headers/uuid.o 00:02:56.516 CXX test/cpp_headers/version.o 00:02:56.516 LINK cuse 00:02:56.516 CXX test/cpp_headers/vfio_user_pci.o 00:02:56.516 CXX test/cpp_headers/vfio_user_spec.o 00:02:56.774 CXX test/cpp_headers/vhost.o 00:02:56.774 CXX test/cpp_headers/vmd.o 00:02:56.774 CXX test/cpp_headers/xor.o 00:02:56.774 CXX test/cpp_headers/zipf.o 00:03:00.061 LINK esnap 00:03:00.320 00:03:00.320 real 1m14.790s 00:03:00.320 user 7m20.807s 00:03:00.320 sys 2m3.384s 00:03:00.320 17:50:07 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:00.320 17:50:07 make -- common/autotest_common.sh@10 -- $ set +x 00:03:00.320 ************************************ 00:03:00.320 END TEST make 00:03:00.320 ************************************ 00:03:00.320 17:50:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:00.321 17:50:07 -- pm/common@29 
-- $ signal_monitor_resources TERM 00:03:00.321 17:50:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:00.321 17:50:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.321 17:50:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:00.321 17:50:07 -- pm/common@44 -- $ pid=5190 00:03:00.321 17:50:07 -- pm/common@50 -- $ kill -TERM 5190 00:03:00.321 17:50:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.321 17:50:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:00.321 17:50:07 -- pm/common@44 -- $ pid=5192 00:03:00.321 17:50:07 -- pm/common@50 -- $ kill -TERM 5192 00:03:00.579 17:50:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:00.579 17:50:07 -- nvmf/common.sh@7 -- # uname -s 00:03:00.579 17:50:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:00.579 17:50:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:00.579 17:50:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:00.579 17:50:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:00.579 17:50:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:00.579 17:50:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:00.579 17:50:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:00.579 17:50:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:00.579 17:50:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:00.579 17:50:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:00.579 17:50:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:03:00.579 17:50:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:03:00.579 17:50:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:00.579 17:50:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:00.579 17:50:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:00.579 17:50:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:00.579 17:50:07 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:00.579 17:50:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:00.579 17:50:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:00.579 17:50:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:00.580 17:50:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.580 17:50:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.580 17:50:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.580 17:50:07 -- paths/export.sh@5 -- # export PATH 00:03:00.580 
17:50:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.580 17:50:07 -- nvmf/common.sh@47 -- # : 0 00:03:00.580 17:50:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:00.580 17:50:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:00.580 17:50:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:00.580 17:50:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:00.580 17:50:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:00.580 17:50:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:00.580 17:50:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:00.580 17:50:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:00.580 17:50:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:00.580 17:50:07 -- spdk/autotest.sh@32 -- # uname -s 00:03:00.580 17:50:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:00.580 17:50:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:00.580 17:50:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:00.580 17:50:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:00.580 17:50:07 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:00.580 17:50:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:00.580 17:50:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:00.580 17:50:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:00.580 17:50:07 -- spdk/autotest.sh@48 -- # udevadm_pid=54658 00:03:00.580 17:50:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:00.580 17:50:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:00.580 17:50:07 -- pm/common@17 -- # local monitor 00:03:00.580 17:50:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.580 17:50:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.580 17:50:07 -- pm/common@25 -- # sleep 1 00:03:00.580 17:50:07 -- pm/common@21 -- # date +%s 00:03:00.580 17:50:07 -- pm/common@21 -- # date +%s 00:03:00.580 17:50:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721843407 00:03:00.580 17:50:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721843407 00:03:00.580 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721843407_collect-cpu-load.pm.log 00:03:00.580 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721843407_collect-vmstat.pm.log 00:03:01.515 17:50:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:01.515 17:50:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:01.515 17:50:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:01.515 17:50:08 -- common/autotest_common.sh@10 -- # set +x 00:03:01.515 17:50:08 -- spdk/autotest.sh@59 -- # create_test_list 00:03:01.515 17:50:08 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:01.515 17:50:08 -- common/autotest_common.sh@10 -- # 
set +x 00:03:01.515 17:50:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:01.515 17:50:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:01.515 17:50:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:01.515 17:50:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:01.515 17:50:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:01.515 17:50:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:01.515 17:50:08 -- common/autotest_common.sh@1455 -- # uname 00:03:01.515 17:50:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:01.515 17:50:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:01.515 17:50:08 -- common/autotest_common.sh@1475 -- # uname 00:03:01.515 17:50:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:01.515 17:50:08 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:01.515 17:50:08 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:01.515 17:50:08 -- spdk/autotest.sh@72 -- # hash lcov 00:03:01.515 17:50:08 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:01.515 17:50:08 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:01.515 --rc lcov_branch_coverage=1 00:03:01.515 --rc lcov_function_coverage=1 00:03:01.515 --rc genhtml_branch_coverage=1 00:03:01.515 --rc genhtml_function_coverage=1 00:03:01.515 --rc genhtml_legend=1 00:03:01.515 --rc geninfo_all_blocks=1 00:03:01.515 ' 00:03:01.515 17:50:08 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:01.515 --rc lcov_branch_coverage=1 00:03:01.515 --rc lcov_function_coverage=1 00:03:01.515 --rc genhtml_branch_coverage=1 00:03:01.515 --rc genhtml_function_coverage=1 00:03:01.515 --rc genhtml_legend=1 00:03:01.515 --rc geninfo_all_blocks=1 00:03:01.515 ' 00:03:01.515 17:50:08 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:01.515 --rc lcov_branch_coverage=1 00:03:01.515 --rc lcov_function_coverage=1 00:03:01.515 --rc genhtml_branch_coverage=1 00:03:01.515 --rc genhtml_function_coverage=1 00:03:01.515 --rc genhtml_legend=1 00:03:01.515 --rc geninfo_all_blocks=1 00:03:01.515 --no-external' 00:03:01.515 17:50:08 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:01.515 --rc lcov_branch_coverage=1 00:03:01.515 --rc lcov_function_coverage=1 00:03:01.515 --rc genhtml_branch_coverage=1 00:03:01.515 --rc genhtml_function_coverage=1 00:03:01.515 --rc genhtml_legend=1 00:03:01.515 --rc geninfo_all_blocks=1 00:03:01.515 --no-external' 00:03:01.515 17:50:08 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:01.773 lcov: LCOV version 1.14 00:03:01.773 17:50:08 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:19.867 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:19.867 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:29.851 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:29.851 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:29.851 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:29.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:29.851 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:29.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:29.851 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:29.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:29.851 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:29.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:29.851 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:29.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:29.851 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:29.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:29.851 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:29.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:29.852 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions 
found 00:03:29.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:29.852 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions 
found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:29.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:29.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:33.139 17:50:39 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:33.139 17:50:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:33.139 17:50:39 -- common/autotest_common.sh@10 -- # set +x 00:03:33.139 17:50:39 -- spdk/autotest.sh@91 -- # rm -f 00:03:33.139 17:50:39 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:33.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.706 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:33.706 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:33.706 17:50:40 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:33.706 17:50:40 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:33.706 17:50:40 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:33.706 17:50:40 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:33.706 17:50:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:33.706 17:50:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:33.706 17:50:40 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:33.706 17:50:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:33.706 17:50:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:33.706 17:50:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:33.706 17:50:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:33.706 17:50:40 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:33.706 17:50:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:33.706 17:50:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:33.706 17:50:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:33.706 17:50:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:33.706 17:50:40 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:33.706 17:50:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:33.706 17:50:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:33.706 17:50:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:33.706 17:50:40 
-- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:33.706 17:50:40 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:33.706 17:50:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:33.706 17:50:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:33.707 17:50:40 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:33.707 17:50:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:33.707 17:50:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:33.707 17:50:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:33.707 17:50:40 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:33.707 17:50:40 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:33.707 No valid GPT data, bailing 00:03:33.707 17:50:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:33.707 17:50:40 -- scripts/common.sh@391 -- # pt= 00:03:33.707 17:50:40 -- scripts/common.sh@392 -- # return 1 00:03:33.707 17:50:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:33.707 1+0 records in 00:03:33.707 1+0 records out 00:03:33.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665305 s, 158 MB/s 00:03:33.707 17:50:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:33.707 17:50:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:33.707 17:50:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:33.707 17:50:40 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:33.707 17:50:40 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:33.965 No valid GPT data, bailing 00:03:33.965 17:50:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:33.965 17:50:40 -- scripts/common.sh@391 -- # pt= 00:03:33.965 17:50:40 -- scripts/common.sh@392 -- # return 1 00:03:33.965 17:50:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:33.965 1+0 records in 00:03:33.965 1+0 records out 00:03:33.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577406 s, 182 MB/s 00:03:33.965 17:50:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:33.965 17:50:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:33.965 17:50:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:33.965 17:50:40 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:33.965 17:50:40 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:33.965 No valid GPT data, bailing 00:03:33.965 17:50:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:33.965 17:50:40 -- scripts/common.sh@391 -- # pt= 00:03:33.965 17:50:40 -- scripts/common.sh@392 -- # return 1 00:03:33.965 17:50:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:33.965 1+0 records in 00:03:33.965 1+0 records out 00:03:33.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00614193 s, 171 MB/s 00:03:33.965 17:50:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:33.965 17:50:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:33.965 17:50:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:33.965 17:50:40 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:33.965 17:50:40 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:33.965 No valid GPT data, bailing 00:03:33.965 17:50:40 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:33.965 17:50:40 -- scripts/common.sh@391 -- # pt= 00:03:33.965 17:50:40 -- scripts/common.sh@392 -- # return 1 00:03:33.965 17:50:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:33.965 1+0 records in 00:03:33.965 1+0 records out 00:03:33.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00600208 s, 175 MB/s 00:03:33.965 17:50:40 -- spdk/autotest.sh@118 -- # sync 00:03:34.224 17:50:40 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:34.224 17:50:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:34.224 17:50:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:36.753 17:50:43 -- spdk/autotest.sh@124 -- # uname -s 00:03:36.753 17:50:43 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:36.754 17:50:43 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:36.754 17:50:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:36.754 17:50:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:36.754 17:50:43 -- common/autotest_common.sh@10 -- # set +x 00:03:36.754 ************************************ 00:03:36.754 START TEST setup.sh 00:03:36.754 ************************************ 00:03:36.754 17:50:43 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:36.754 * Looking for test storage... 00:03:36.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:36.754 17:50:43 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:36.754 17:50:43 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:36.754 17:50:43 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:36.754 17:50:43 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:36.754 17:50:43 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:36.754 17:50:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:36.754 ************************************ 00:03:36.754 START TEST acl 00:03:36.754 ************************************ 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:36.754 * Looking for test storage... 
00:03:36.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:36.754 17:50:43 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:36.754 17:50:43 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.754 17:50:43 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:36.754 17:50:43 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:36.754 17:50:43 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:36.754 17:50:43 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:36.754 17:50:43 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:36.754 17:50:43 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.754 17:50:43 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:37.326 17:50:44 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:37.326 17:50:44 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:37.326 17:50:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.326 17:50:44 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:37.326 17:50:44 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.326 17:50:44 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:37.892 17:50:44 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:37.892 17:50:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:37.892 17:50:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.892 Hugepages 00:03:37.892 node hugesize free / total 00:03:38.149 17:50:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:38.149 17:50:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:38.149 17:50:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.149 00:03:38.149 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.149 17:50:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:38.149 17:50:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:38.149 17:50:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.149 17:50:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:38.149 17:50:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:38.149 17:50:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:38.149 17:50:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.149 17:50:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:38.149 17:50:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:38.149 17:50:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:38.149 17:50:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:38.149 17:50:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:38.149 17:50:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.408 17:50:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:38.408 17:50:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:38.408 17:50:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:38.408 17:50:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:38.408 17:50:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:38.408 17:50:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.408 17:50:45 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:38.408 17:50:45 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:38.408 17:50:45 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.408 17:50:45 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.408 17:50:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:38.408 ************************************ 00:03:38.408 START TEST denied 00:03:38.408 ************************************ 00:03:38.408 17:50:45 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:38.408 17:50:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:38.408 17:50:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:38.408 17:50:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.408 17:50:45 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:38.408 17:50:45 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:39.350 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:39.350 17:50:46 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:39.350 17:50:46 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:39.350 17:50:46 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:39.350 17:50:46 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:39.350 17:50:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:39.350 17:50:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:39.350 17:50:46 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:39.350 17:50:46 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:39.350 17:50:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.350 17:50:46 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.941 00:03:39.941 real 0m1.621s 00:03:39.941 user 0m0.616s 00:03:39.941 sys 0m0.981s 00:03:39.941 17:50:46 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:39.941 17:50:46 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:39.941 ************************************ 00:03:39.941 END TEST denied 00:03:39.941 ************************************ 00:03:39.941 17:50:46 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:39.941 17:50:46 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.941 17:50:46 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.941 17:50:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.941 ************************************ 00:03:39.941 START TEST allowed 00:03:39.941 ************************************ 00:03:39.941 17:50:46 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:39.941 17:50:46 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:39.941 17:50:46 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:39.941 17:50:46 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:39.941 17:50:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.941 17:50:46 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:40.904 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.904 17:50:47 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:40.904 17:50:47 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:40.904 17:50:47 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:40.904 17:50:47 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:40.904 17:50:47 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:40.904 17:50:47 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:40.904 17:50:47 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:40.904 17:50:47 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:40.904 17:50:47 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.904 17:50:47 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.846 00:03:41.846 real 0m1.675s 00:03:41.846 user 0m0.717s 00:03:41.846 sys 0m0.969s 00:03:41.846 17:50:48 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.846 17:50:48 setup.sh.acl.allowed -- common/autotest_common.sh@10 
-- # set +x 00:03:41.846 ************************************ 00:03:41.846 END TEST allowed 00:03:41.846 ************************************ 00:03:41.846 ************************************ 00:03:41.846 END TEST acl 00:03:41.846 ************************************ 00:03:41.846 00:03:41.846 real 0m5.257s 00:03:41.846 user 0m2.214s 00:03:41.846 sys 0m3.076s 00:03:41.847 17:50:48 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.847 17:50:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:41.847 17:50:48 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:41.847 17:50:48 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.847 17:50:48 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.847 17:50:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.847 ************************************ 00:03:41.847 START TEST hugepages 00:03:41.847 ************************************ 00:03:41.847 17:50:48 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:41.847 * Looking for test storage... 00:03:41.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5886756 kB' 'MemAvailable: 7400104 kB' 'Buffers: 2436 kB' 'Cached: 1724692 kB' 'SwapCached: 0 kB' 'Active: 478124 kB' 'Inactive: 1354456 kB' 'Active(anon): 115940 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 107604 kB' 'Mapped: 48668 kB' 'Shmem: 10488 kB' 'KReclaimable: 67296 kB' 'Slab: 143204 kB' 'SReclaimable: 67296 kB' 'SUnreclaim: 75908 kB' 'KernelStack: 6316 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 335552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.847 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 
17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 
17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:41.848 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
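[editor's note] The trace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches Hugepagesize (2048 kB); setup/hugepages.sh then records the 2 MiB default, unsets any HUGE* overrides, and clear_hp zeroes the per-node hugepage pools before exporting CLEAR_HUGE=yes. A minimal sketch of that flow, simplified rather than the exact SPDK helpers:

# Simplified sketch of the get_meminfo + clear_hp steps traced above
# (illustrative only; the real setup/common.sh and setup/hugepages.sh differ in detail).
get_meminfo() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do      # "MemTotal: 12241976 kB" -> var=MemTotal val=12241976
        [[ $var == "$key" ]] || continue      # the long runs of "continue" in the log
        echo "$val"
        return 0
    done < /proc/meminfo
}

default_hugepages=$(get_meminfo Hugepagesize)   # 2048 (kB) on this VM

# clear_hp equivalent: drop any existing per-node hugepage reservations (needs root)
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"
done
export CLEAR_HUGE=yes

On this run Hugepagesize comes back as 2048, so default_hugepages=2048 and the single node's pools are reset to 0 before the default_setup test starts.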
00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:41.849 17:50:48 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:41.849 17:50:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.849 17:50:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.849 17:50:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:41.849 ************************************ 00:03:41.849 START TEST default_setup 00:03:41.849 ************************************ 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.849 17:50:48 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:42.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.823 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.823 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.823 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7963340 kB' 'MemAvailable: 9476552 kB' 'Buffers: 2436 kB' 'Cached: 1724684 kB' 'SwapCached: 0 kB' 'Active: 494204 kB' 'Inactive: 1354464 kB' 'Active(anon): 132020 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123140 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142848 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75844 kB' 'KernelStack: 6336 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
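[editor's note] The printf entry above is the full /proc/meminfo snapshot that get_meminfo captured with mapfile -t mem (HugePages_Total: 1024 at Hugepagesize: 2048 kB, i.e. the 2 GiB shown as Hugetlb: 2097152 kB that default_setup requested), and the [[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue entries that follow are the per-field scan for AnonHugePages; bash xtrace escapes each character of the literal pattern, which is why the key appears backslash-escaped in the log. Roughly, assuming the same snapshot-then-scan shape as the trace (not the exact SPDK code):

# Sketch of the snapshot-then-scan pattern visible in the trace above
# (names mirror the log; the real setup/common.sh adds per-node handling).
mapfile -t mem < /proc/meminfo          # one array element per "Key: value kB" line

get=AnonHugePages
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue    # skipped keys show up as "continue" in xtrace
    echo "$val"                         # 0 here, so verify_nr_hugepages records anon=0
    break
done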
00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.824 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.824 
17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.825 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241976 kB' 'MemFree: 7966000 kB' 'MemAvailable: 9479220 kB' 'Buffers: 2436 kB' 'Cached: 1724684 kB' 'SwapCached: 0 kB' 'Active: 493804 kB' 'Inactive: 1354472 kB' 'Active(anon): 131620 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122784 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142848 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75844 kB' 'KernelStack: 6336 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.826 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.827 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.828 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7965748 kB' 'MemAvailable: 9478968 kB' 'Buffers: 2436 kB' 'Cached: 1724684 kB' 'SwapCached: 0 kB' 'Active: 494024 kB' 'Inactive: 1354472 kB' 'Active(anon): 131840 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123004 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142848 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75844 kB' 'KernelStack: 6336 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:42.828 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.109 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 
17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.110 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.111 
17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.111 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.112 
17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.112 nr_hugepages=1024 00:03:43.112 resv_hugepages=0 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.112 surplus_hugepages=0 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.112 anon_hugepages=0 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7965748 kB' 'MemAvailable: 9478968 kB' 'Buffers: 2436 kB' 'Cached: 1724684 kB' 'SwapCached: 0 kB' 'Active: 493800 kB' 'Inactive: 1354472 kB' 'Active(anon): 131616 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122784 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142848 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75844 kB' 'KernelStack: 6336 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.112 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 
17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.113 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
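What the xtrace around this point is stepping through is the per-key scan done by get_meminfo() in setup/common.sh: it reads /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node argument is given), strips the "Node N " prefix that the per-node files carry, then walks the key/value pairs with IFS=': ' and continues past every key until it hits the requested field, echoing its value. The sketch below is reconstructed from this trace, not copied from the SPDK source, so the exact control flow and the standalone accounting lines at the end are assumptions; the field names, file paths, and the 1024/0/0 values are the ones visible in the trace above.

    #!/usr/bin/env bash
    # Rough reconstruction of the lookup the xtrace exercises; not the verbatim SPDK helper.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f mem
        mem_f=/proc/meminfo
        # Per-node lookups (e.g. "get_meminfo HugePages_Surp 0") read the node's own meminfo.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <N> "; strip it, as the trace does.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long runs of "continue" seen above
            echo "$val"
            return 0
        done
        return 1
    }

    # The accounting that setup/hugepages.sh then performs with the returned values
    # (numbers taken from this run: 1024 total, 0 surplus, 0 reserved):
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) &&
        echo "hugepage accounting consistent"

In this run the check holds (1024 == 1024 + 0 + 0), after which the trace moves on to get_nodes and repeats the same lookup per NUMA node (node0 only here) to fill in the nodes_test accounting.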
00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.114 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7966000 kB' 'MemUsed: 4275976 kB' 'SwapCached: 0 kB' 'Active: 493968 kB' 'Inactive: 1354472 kB' 'Active(anon): 131784 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1727120 kB' 'Mapped: 48716 kB' 'AnonPages: 123016 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67004 kB' 'Slab: 142848 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.115 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.116 
17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.116 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.116 17:50:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace at 00:03:43.116-00:03:43.117 17:50:49 for NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free ...] 00:03:43.117 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.117 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:43.117 17:50:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:43.117 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.117 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.117 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 --
# sorted_t[nodes_test[node]]=1 00:03:43.117 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.117 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:43.117 node0=1024 expecting 1024 00:03:43.117 17:50:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.117 00:03:43.117 real 0m1.125s 00:03:43.117 user 0m0.492s 00:03:43.117 sys 0m0.614s 00:03:43.117 17:50:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.117 17:50:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:43.117 ************************************ 00:03:43.117 END TEST default_setup 00:03:43.117 ************************************ 00:03:43.117 17:50:49 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:43.117 17:50:49 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.117 17:50:49 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.117 17:50:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.117 ************************************ 00:03:43.117 START TEST per_node_1G_alloc 00:03:43.117 ************************************ 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # 
nodes_test[_no_nodes]=512 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:43.117 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:43.118 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.118 17:50:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:43.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.396 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.396 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9011604 kB' 'MemAvailable: 10524824 kB' 'Buffers: 2436 kB' 'Cached: 1724684 kB' 'SwapCached: 0 kB' 'Active: 494460 kB' 'Inactive: 1354472 kB' 'Active(anon): 132276 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 
'Inactive(file): 1354472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123456 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142852 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75848 kB' 'KernelStack: 6452 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
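
The field-by-field scan above is setup/common.sh's get_meminfo walking the /proc/meminfo snapshot it just captured, looking for AnonHugePages. A minimal stand-alone sketch of that lookup pattern follows; the function name get_meminfo_field is illustrative and this is a simplified stand-in (it streams the file instead of using mapfile), not the SPDK helper itself:

# Sketch of the lookup pattern traced above: split each /proc/meminfo line
# on ': ' and echo the value of the requested field. Illustrative only.
get_meminfo_field() {
    local want="$1" var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_meminfo_field AnonHugePages   # prints 0 on the VM in this run
get_meminfo_field HugePages_Free  # prints 512 here
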
00:03:43.697 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] [... setup/common.sh@31-32 repeats the same continue / IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] trace at 00:03:43.697-00:03:43.698 17:50:50 for every field from Inactive through Percpu in the order of the /proc/meminfo snapshot above ...] 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.698 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.699 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.699 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9011604 kB' 'MemAvailable: 10524820 kB' 'Buffers: 2436 kB' 'Cached: 1724680 kB' 'SwapCached: 0 kB' 'Active: 493988 kB' 'Inactive: 1354468 kB' 'Active(anon): 131804 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354468 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122980 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142836 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75832 kB' 'KernelStack: 6380 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:43.699 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.699 17:50:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace at 00:03:43.699-00:03:43.700 17:50:50 for every field from MemFree through HugePages_Total in the order of the /proc/meminfo snapshot above ...] 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
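
For reference, the numbers these scans keep returning line up with the allocation requested at the top of this test: get_test_nr_hugepages was called with 1048576 kB for node 0, which at the 2048 kB Hugepagesize shown in the snapshots is the 512 pages reported by HugePages_Total/HugePages_Free (NRHUGE=512, HUGENODE=0). A quick arithmetic sketch; the variable names below are illustrative, not the hugepages.sh ones:

# 1 GiB requested on node 0 with the default 2048 kB hugepage size -> 512
# pages, matching HugePages_Total/HugePages_Free and Hugetlb in the dumps above.
size_kb=1048576            # per-node allocation requested by the test
hugepagesize_kb=2048       # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "NRHUGE=$nr_hugepages HUGENODE=0"                    # NRHUGE=512 HUGENODE=0
echo "Hugetlb=$(( nr_hugepages * hugepagesize_kb )) kB"   # Hugetlb=1048576 kB
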
00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.700 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9011604 kB' 'MemAvailable: 10524820 kB' 'Buffers: 2436 kB' 'Cached: 1724680 kB' 'SwapCached: 0 kB' 'Active: 493684 kB' 'Inactive: 1354468 kB' 'Active(anon): 131500 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354468 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122624 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142836 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75832 kB' 'KernelStack: 6332 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.701 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
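[editor's note] The loop traced above is setup/common.sh's get_meminfo walking a meminfo-style file one "Key: value" pair at a time and discarding every key that is not the one requested (here HugePages_Rsvd). Below is a minimal stand-alone sketch of that scan pattern; the helper name and the exact prefix handling are illustrative rather than a copy of the SPDK script, and the per-node branch assumes the standard /sys/devices/system/node/nodeN/meminfo layout.

#!/usr/bin/env bash
# Sketch only: pull a single counter out of /proc/meminfo or a per-node
# meminfo file, mirroring the var/val scan visible in the trace above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node "$node" }    # per-node sysfs lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    return 1
}
# Usage matching the calls in the trace: get_meminfo_sketch HugePages_Rsvd
# (system-wide) or get_meminfo_sketch HugePages_Surp 0 (NUMA node 0).
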
00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 
17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:43.703 nr_hugepages=512 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:43.703 resv_hugepages=0 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.703 surplus_hugepages=0 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.703 anon_hugepages=0 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9011604 kB' 'MemAvailable: 10524820 kB' 'Buffers: 2436 kB' 'Cached: 1724680 kB' 'SwapCached: 0 kB' 'Active: 493832 kB' 'Inactive: 1354468 kB' 'Active(anon): 131648 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354468 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122748 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142836 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75832 kB' 'KernelStack: 6300 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 
kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.703 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
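[editor's note] At this point the script has already echoed nr_hugepages=512, resv_hugepages=0 and surplus_hugepages=0, and the scan traced on either side of this note is re-reading /proc/meminfo for HugePages_Total so that hugepages.sh@110 can confirm the kernel's view matches the request. A hedged sketch of that consistency check, reusing the helper sketched earlier (variable names are illustrative; the values in comments are the ones from this run):

# Sketch of the check at hugepages.sh@110: the kernel's HugePages_Total
# should equal the requested count plus surplus and reserved pages.
nr_hugepages=512
surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)   # 512 in this run
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total"
else
    echo "mismatch: total=$total, expected $((nr_hugepages + surp + resv))" >&2
fi
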
00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@33 -- # echo 512 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9011104 kB' 'MemUsed: 3230872 kB' 'SwapCached: 0 kB' 'Active: 493844 kB' 'Inactive: 1354468 kB' 'Active(anon): 131660 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354468 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1727116 kB' 'Mapped: 48716 kB' 'AnonPages: 122776 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67004 kB' 'Slab: 142836 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.706 17:50:50 
00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:43.706 node0=512 expecting 512
00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:43.706 
00:03:43.706 real	0m0.608s
00:03:43.706 user	0m0.277s
00:03:43.706 sys	0m0.370s
00:03:43.706 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:43.707 17:50:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:43.707 ************************************
00:03:43.707 END TEST per_node_1G_alloc
00:03:43.707 ************************************
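The assertion traced just above reduces to comparing each NUMA node's allocated 2048 kB hugepage count with the value the test requested. A minimal sketch of that kind of check, assuming the standard Linux sysfs layout for per-node hugepage counters (this is not the SPDK helper itself, and the variable names are illustrative):

expected=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*/}                                               # e.g. node0
    actual=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")  # pages currently allocated on this node
    echo "$node=$actual expecting $expected"                           # mirrors the 'node0=512 expecting 512' line above
    [[ "$actual" == "$expected" ]] || exit 1                           # fail on mismatch, as the test would
done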
00:03:43.707 17:50:50 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:43.707 17:50:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:43.707 17:50:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:43.707 17:50:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:43.707 ************************************
00:03:43.707 START TEST even_2G_alloc
00:03:43.707 ************************************
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.707 17:50:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:44.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:44.279 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.279 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
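The get_test_nr_hugepages trace above converts the 2 GiB request (size=2097152, in kB) into nr_hugepages=1024 and, because this VM has a single memory node, assigns the whole count to node 0 before HUGE_EVEN_ALLOC=yes hands it to setup.sh. A rough sketch of that arithmetic, assuming the 2048 kB default hugepage size reported later in this log (variable names are illustrative, not the script's own):

size_kb=2097152                                                   # requested allocation, expressed in kB (2 GiB)
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this VM
nr_hugepages=$(( size_kb / hugepage_kb ))                         # 2097152 / 2048 = 1024 pages
no_nodes=1                                                        # single NUMA node here
echo "NRHUGE=$(( nr_hugepages / no_nodes )) hugepages per node"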
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.279 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7961020 kB' 'MemAvailable: 9474244 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 494112 kB' 'Inactive: 1354476 kB' 'Active(anon): 131928 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123076 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142728 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75724 kB' 'KernelStack: 6360 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:03:44.279 [trace condensed: the get_meminfo read loop skips every /proc/meminfo field until it reaches AnonHugePages]
00:03:44.280 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.280 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.280 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
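The get_meminfo calls traced in this section all follow the same pattern: dump /proc/meminfo (or a node-specific meminfo), then walk it with IFS=': ' and read -r until the requested field is found, printing 0 if it never appears. A simplified, hedged re-implementation of that idea (the function below is not the actual setup/common.sh helper and omits its per-node handling):

get_meminfo() {
    # Return the value of one /proc/meminfo field, or 0 if it is missing.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}
get_meminfo AnonHugePages   # prints 0 on this VM, matching the anon=0 assignment above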
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.281 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7960768 kB' 'MemAvailable: 9473992 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 493868 kB' 'Inactive: 1354476 kB' 'Active(anon): 131684 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123076 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142692 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75688 kB' 'KernelStack: 6320 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:03:44.281 [trace condensed: the get_meminfo read loop skips every /proc/meminfo field until it reaches HugePages_Surp]
00:03:44.282 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.282 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.283 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7960768 kB' 'MemAvailable: 9473992 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 493780 kB' 'Inactive: 1354476 kB' 'Active(anon): 131596 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123016 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142692 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75688 kB' 'KernelStack: 6320 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:03:44.283 [trace condensed: the get_meminfo read loop begins skipping /proc/meminfo fields while searching for HugePages_Rsvd; the trace continues beyond this excerpt]
17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.284 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 
17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.285 nr_hugepages=1024 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.285 resv_hugepages=0 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.285 surplus_hugepages=0 00:03:44.285 anon_hugepages=0 00:03:44.285 17:50:51 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7960768 kB' 'MemAvailable: 9473992 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 493744 kB' 'Inactive: 1354476 kB' 'Active(anon): 131560 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122980 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142688 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75684 kB' 'KernelStack: 6304 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
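The trace above is setup/common.sh's get_meminfo walking a captured /proc/meminfo snapshot key by key: every field that is not the requested one (HugePages_Rsvd earlier, HugePages_Total just below) hits the "continue" branch until the matching line is reached and its value is echoed. A minimal sketch of that pattern, reading the live file directly instead of the mapfile snapshot the script keeps, with an illustrative helper name rather than the script's own:

# Sketch of the lookup pattern traced above; not the actual setup/common.sh code.
get_meminfo_sketch() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
    echo "$val"                        # numeric value only; the "kB" unit lands in the discarded field
    return 0
  done < /proc/meminfo
}

resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in the run above
total=$(get_meminfo_sketch HugePages_Total)  # 1024 before the odd_alloc test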
00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.285 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.286 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc 
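With the system-wide count confirmed against the accounting identity the script checks, (( HugePages_Total == nr_hugepages + surp + resv )), the test enumerates the NUMA nodes and repeats the lookup against each node's own meminfo, stripping the per-node line prefix so the same key/value parsing applies. A rough per-node variant of the sketch above, using sed where the script itself uses an extglob parameter expansion; names are illustrative:

# Per-node lookup: prefer /sys/devices/system/node/nodeN/meminfo and drop the
# "Node N " prefix from each line before parsing. Sketch only.
get_node_meminfo_sketch() {
  local get=$1 node=$2 mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  sed "s/^Node $node //" "$mem_f" |
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; break; }
    done
}

surp0=$(get_node_meminfo_sketch HugePages_Surp 0)   # 0 for node0 in this run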
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7960768 kB' 'MemUsed: 4281208 kB' 'SwapCached: 0 kB' 'Active: 493792 kB' 'Inactive: 1354476 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1727124 kB' 'Mapped: 48716 kB' 'AnonPages: 123052 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67004 kB' 'Slab: 142688 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.287 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.287 
17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.288 node0=1024 expecting 1024 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.288 00:03:44.288 real 0m0.600s 00:03:44.288 user 0m0.269s 00:03:44.288 sys 0m0.364s 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.288 17:50:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.288 
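even_2G_alloc finishes by comparing, node by node, the page count it expected against the count just read back; the "node0=1024 expecting 1024" line and the final [[ 1024 == 1024 ]] test are that assertion passing. Reduced to its core, with placeholder arrays standing in for the script's nodes_test/nodes_sys bookkeeping:

# Final per-node assertion of the test, as a sketch.
declare -A expected=( [0]=1024 )   # pages the test asked for on each node
declare -A observed=( [0]=1024 )   # pages reported back for each node in this run

for node in "${!expected[@]}"; do
  echo "node$node=${observed[$node]} expecting ${expected[$node]}"
  [[ ${observed[$node]} == "${expected[$node]}" ]] || exit 1
done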
************************************ 00:03:44.288 END TEST even_2G_alloc 00:03:44.288 ************************************ 00:03:44.288 17:50:51 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:44.288 17:50:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.288 17:50:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.288 17:50:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.288 ************************************ 00:03:44.288 START TEST odd_alloc 00:03:44.288 ************************************ 00:03:44.288 17:50:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:44.288 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:44.288 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:44.288 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.288 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.289 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:44.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.861 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:44.861 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:44.861 17:50:51 
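The odd_alloc case that starts here requests 2098176 kB (HUGEMEM=2049, in MiB), which is not a whole number of 2048 kB pages, so get_test_nr_hugepages lands on an odd 1025 pages and places them all on the single node. The arithmetic, sketched with the values visible in the trace and an assumed round-up rule:

# Reproduces the numbers in the trace; the exact rounding used by
# get_test_nr_hugepages is assumed here, not taken from its source.
HUGEMEM=2049                         # MiB, as exported for the odd_alloc test
size_kb=$(( HUGEMEM * 1024 ))        # 2098176 kB
hugepage_kb=2048                     # Hugepagesize reported in the snapshots above

nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "nr_hugepages=$nr_hugepages"    # 1025, matching the trace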
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7958352 kB' 'MemAvailable: 9471576 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 494392 kB' 'Inactive: 1354476 kB' 'Active(anon): 132208 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123340 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142708 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75704 kB' 'KernelStack: 6328 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.861 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.862 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7958352 kB' 'MemAvailable: 9471576 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 494100 kB' 'Inactive: 1354476 kB' 'Active(anon): 131916 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123092 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142708 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75704 kB' 'KernelStack: 6352 kB' 'PageTables: 4268 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.863 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.864 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.865 17:50:51 
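Note: each get_meminfo call traced here walks the meminfo lines one field at a time until the requested key (HugePages_Surp just above, HugePages_Rsvd next) matches, then echoes its value. A minimal stand-alone sketch of that lookup, assuming bash 4+ and, for per-node queries, the /sys/devices/system/node/node<N>/meminfo layout; the function name is hypothetical and this is not the setup/common.sh source:

# Hedged sketch: look up one meminfo field, optionally for a single NUMA node,
# the way the traced read loop above does.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS='' read -r line; do
        # per-node files prefix every line with "Node <N> "; drop that prefix
        [[ $line =~ ^Node\ [0-9]+\ (.*) ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1   # requested field not present
}
# Usage, with values from the dump above: get_meminfo_sketch HugePages_Surp  -> 0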
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7958352 kB' 'MemAvailable: 9471576 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 493844 kB' 'Inactive: 1354476 kB' 'Active(anon): 131660 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122832 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142696 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75692 kB' 'KernelStack: 6352 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.865 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.866 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.867 
17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:44.867 nr_hugepages=1025 00:03:44.867 resv_hugepages=0 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.867 surplus_hugepages=0 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.867 anon_hugepages=0 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7959392 kB' 'MemAvailable: 9472616 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 494056 kB' 'Inactive: 1354476 kB' 'Active(anon): 131872 kB' 'Inactive(anon): 0 kB' 
'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123056 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142696 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75692 kB' 'KernelStack: 6336 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.867 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 
17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.868 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.869 
17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7959392 kB' 'MemUsed: 4282584 kB' 'SwapCached: 0 kB' 'Active: 494008 kB' 'Inactive: 1354476 kB' 'Active(anon): 131824 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1727124 kB' 'Mapped: 48716 kB' 'AnonPages: 122952 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67004 kB' 'Slab: 142696 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.869 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.870 node0=1025 expecting 1025 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:44.870 
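Once the global count checks out, the test walks each NUMA node, adds reserved and surplus pages to the expected count, and compares against what sysfs reports, which is where the "node0=1025 expecting 1025" line above comes from. The following is a hedged, standalone sketch of that arithmetic, reusing the hypothetical get_meminfo_sketch helper from the earlier sketch; the nodes_sys/nodes_test names follow the trace, but the surrounding test harness is omitted.

# Global sanity check traced at hugepages.sh@107-109: totals must add up.
nr_hugepages=1025 surp=0 resv=0
(( 1025 == nr_hugepages + surp + resv )) || echo "unexpected global hugepage count"

# Per-node check traced at hugepages.sh@115-130 (single-node VM in this run).
declare -a nodes_sys=(1025)     # pages each NUMA node actually reports
declare -a nodes_test=(1025)    # pages the test expects on each node
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                          # reserved pages
    (( nodes_test[node] += $(get_meminfo_sketch HugePages_Surp "$node") ))  # surplus pages
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} -eq ${nodes_test[node]} ]] || echo "node $node count mismatch"
done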
00:03:44.870 real 0m0.562s 00:03:44.870 user 0m0.267s 00:03:44.870 sys 0m0.337s 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.870 17:50:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.870 ************************************ 00:03:44.870 END TEST odd_alloc 00:03:44.870 ************************************ 00:03:44.870 17:50:51 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:44.870 17:50:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.870 17:50:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.870 17:50:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.129 ************************************ 00:03:45.129 START TEST custom_alloc 00:03:45.129 ************************************ 00:03:45.129 17:50:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:45.129 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.130 17:50:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.391 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.391 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.391 17:50:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9009536 kB' 'MemAvailable: 10522760 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 494184 kB' 'Inactive: 1354476 kB' 'Active(anon): 132000 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123140 kB' 'Mapped: 48860 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142732 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75728 kB' 'KernelStack: 6408 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.391 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
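custom_alloc sizes its request differently from odd_alloc: the 1048576 kB (1 GiB) target traced above is divided by the 2048 kB default hugepage size into 512 pages, all pinned to node 0 through HUGENODE='nodes_hp[0]=512' before /home/vagrant/spdk_repo/spdk/scripts/setup.sh is re-run. A rough sketch of that arithmetic follows, using the log's values and the hypothetical get_meminfo_sketch helper; how setup.sh itself consumes HUGENODE is not visible in this trace.

# How the 1 GiB request becomes HUGENODE='nodes_hp[0]=512' (values from the log).
size=1048576                                              # requested size in kB (1 GiB)
default_hugepages=$(get_meminfo_sketch Hugepagesize)      # 2048 kB on this VM
nr_hugepages=$(( size / default_hugepages ))              # 1048576 / 2048 = 512 pages
HUGENODE="nodes_hp[0]=$nr_hugepages"                      # pin all 512 pages to node 0

# The test then re-runs the setup script with that request, roughly:
#   HUGENODE="$HUGENODE" /home/vagrant/spdk_repo/spdk/scripts/setup.sh
# (exact environment handling inside setup.sh is not shown in this trace)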
[... 00:03:45.391-00:03:45.392 setup/common.sh@31/@32: the read/compare loop walks the remaining /proc/meminfo fields in order (Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk), hitting continue for every name that is not AnonHugePages ...] 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.392 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9009796 kB' 'MemAvailable: 10523020 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 493868 kB' 'Inactive: 1354476 kB' 'Active(anon): 131684 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123088 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142736 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75732 kB' 'KernelStack: 6332 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.393 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.393 17:50:52 
setup.sh.hugepages.custom_alloc -- [... 00:03:45.393-00:03:45.656 setup/common.sh@31/@32: the same read/compare loop walks Active(anon) through HugePages_Rsvd, hitting continue for every field that is not HugePages_Surp ...] 00:03:45.656 17:50:52
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9009796 kB' 'MemAvailable: 10523020 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 493892 kB' 'Inactive: 1354476 kB' 'Active(anon): 131708 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122876 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142736 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75732 kB' 'KernelStack: 6348 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.656 17:50:52 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue [... 00:03:45.656-00:03:45.658 setup/common.sh@31/@32: the read/compare loop walks MemAvailable through FilePmdMapped, hitting continue for every field that is not HugePages_Rsvd ...] 00:03:45.658 17:50:52
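
At this point the third scan is about to match HugePages_Rsvd; hugepages.sh then has anon, surp and resv in hand and, in the trace that follows, prints them and checks that the 512-page custom allocation is fully accounted for via (( 512 == nr_hugepages + surp + resv )) and (( 512 == nr_hugepages )). A small self-contained sketch of that bookkeeping, with variable names mirrored from the trace and values hard-coded from this run rather than queried live:

    #!/usr/bin/env bash
    # Sketch of the accounting hugepages.sh performs in the next trace lines;
    # the numbers come from this run (512 pre-allocated 2048 kB pages).
    want=512          # pages requested by the custom_alloc test case
    nr_hugepages=512  # HugePages_Total reported by get_meminfo
    surp=0            # HugePages_Surp
    resv=0            # HugePages_Rsvd
    anon=0            # AnonHugePages (kB of transparent hugepages)

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Every requested page must be accounted for by the pool counters:
    (( want == nr_hugepages + surp + resv )) || { echo "hugepage pool mismatch" >&2; exit 1; }
    (( want == nr_hugepages )) || { echo "unexpected nr_hugepages" >&2; exit 1; }

    # Cross-check against the meminfo snapshots above:
    # 512 pages x 2048 kB (Hugepagesize) = 1048576 kB, the reported Hugetlb value.
    echo "$(( nr_hugepages * 2048 )) kB"

The two (( ... )) tests are the arithmetic guards visible at setup/hugepages.sh@107 and @109 below; on success the test re-reads HugePages_Total, which is the get_meminfo call whose trace resumes right after this point.
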
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:45.658 nr_hugepages=512 00:03:45.658 resv_hugepages=0 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.658 surplus_hugepages=0 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.658 anon_hugepages=0 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.658 
17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9009796 kB' 'MemAvailable: 10523020 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 494132 kB' 'Inactive: 1354476 kB' 'Active(anon): 131948 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123116 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 67004 kB' 'Slab: 142736 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75732 kB' 'KernelStack: 6332 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.658 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 
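A note on the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l rendering that fills these lines: the comparison in the script is against a quoted variable, roughly [[ $var == "$get" ]], and under set -x bash prints a quoted == right-hand side with every character backslash-escaped so the traced form is still a literal, non-glob match. Only the HugePages_Total entry matches; every other key falls through to the traced continue. A two-line illustration with hypothetical values:

    get=HugePages_Total; var=MemFree
    ( set -x; [[ $var == "$get" ]] )  # xtrace should render the pattern roughly as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l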
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.659 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9009796 kB' 'MemUsed: 3232180 kB' 'SwapCached: 0 kB' 'Active: 494264 kB' 'Inactive: 1354476 kB' 'Active(anon): 132080 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1727124 kB' 'Mapped: 49012 kB' 'AnonPages: 123340 kB' 'Shmem: 10464 kB' 'KernelStack: 6364 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67004 kB' 'Slab: 142732 kB' 'SReclaimable: 67004 kB' 'SUnreclaim: 75728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.660 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 
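The node bookkeeping traced a little above (get_nodes setting nodes_sys[0]=512 with no_nodes=1, then get_meminfo HugePages_Surp 0 switching mem_f to /sys/devices/system/node/node0/meminfo and stripping the "Node <n> " prefix) boils down to: enumerate the NUMA node directories under sysfs and read each node's hugepage counters from that node's own meminfo. A simplified sketch of that enumeration (illustrative only; the variable names mirror the trace but this is not the real setup/hugepages.sh):

    #!/usr/bin/env bash
    # Enumerate NUMA nodes and read HugePages_Total from each node's meminfo,
    # the way the traced get_nodes step arrives at nodes_sys[0]=512, no_nodes=1.
    nodes_sys=()
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -e $node_dir/meminfo ]] || continue
        n=${node_dir##*node}                      # ".../node0" -> "0"
        # per-node lines look like: "Node 0 HugePages_Total:   512"
        nodes_sys[$n]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes node0=${nodes_sys[0]:-n/a}"  # -> no_nodes=1 node0=512 here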
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.662 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.663 node0=512 expecting 512 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:45.663 00:03:45.663 real 0m0.609s 00:03:45.663 user 0m0.262s 00:03:45.663 sys 0m0.356s 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.663 17:50:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.663 ************************************ 00:03:45.663 END TEST custom_alloc 00:03:45.663 ************************************ 00:03:45.663 17:50:52 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:45.663 17:50:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.663 17:50:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.663 17:50:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.663 ************************************ 00:03:45.663 START TEST no_shrink_alloc 00:03:45.663 ************************************ 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:45.663 17:50:52 
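custom_alloc finishes above (node0=512 expecting 512, [[ 512 == 512 ]], 0m0.609s real) and run_test hands off to no_shrink_alloc, whose first step is the get_test_nr_hugepages 2097152 0 call the next stretch of trace walks through: the requested size divided by the default hugepage size (Hugepagesize: 2048 kB in the dumps above) gives the 1024 pages that are then assigned entirely to the single requested node, node 0. The arithmetic, with illustrative variable names and units assumed to be kB (consistent with the Hugetlb: 2097152 kB figure that appears later):

    size=2097152                                   # argument to get_test_nr_hugepages
    default_hugepages=2048                         # Hugepagesize: 2048 kB
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    nodes_test[0]=$nr_hugepages                    # only node 0 was requested
    echo "nr_hugepages=$nr_hugepages"              # matches nr_hugepages=1024 in the trace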
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.663 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.921 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.921 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7971916 kB' 'MemAvailable: 9485120 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 489604 kB' 'Inactive: 1354476 kB' 'Active(anon): 127420 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118572 kB' 'Mapped: 48124 kB' 'Shmem: 10464 kB' 'KReclaimable: 66964 kB' 'Slab: 142400 kB' 'SReclaimable: 66964 kB' 'SUnreclaim: 75436 kB' 'KernelStack: 6196 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
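The AnonHugePages pass these lines sit in the middle of is gated by the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] check traced just before it: the script looks at the transparent-hugepage mode (here "always [madvise] never", i.e. madvise is the active setting) and only reads AnonHugePages when THP is not pinned to [never], presumably so anonymous hugepages can be accounted for separately; in this run the counter comes back 0 kB. A hedged sketch of that gate (the sysfs path is the standard kernel one; the variable names are illustrative):

    # Read the THP mode, e.g. "always [madvise] never"; the bracketed word is the active one.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *"[never]"* ]]; then
        # THP not disabled: anonymous hugepages may exist, so read the counter
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)  # 0 kB in this run
    else
        anon_kb=0
    fi
    echo "anon_hugepages_kb=${anon_kb:-0}"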
00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7971916 kB' 'MemAvailable: 9485120 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 489152 kB' 'Inactive: 1354476 kB' 'Active(anon): 126968 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118076 kB' 'Mapped: 48124 kB' 'Shmem: 10464 kB' 'KReclaimable: 66964 kB' 'Slab: 142400 kB' 'SReclaimable: 66964 kB' 'SUnreclaim: 75436 kB' 'KernelStack: 6148 kB' 'PageTables: 3596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 
17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.186 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.187 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7971916 kB' 'MemAvailable: 9485120 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 489272 kB' 'Inactive: 1354476 kB' 'Active(anon): 127088 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118204 kB' 'Mapped: 48124 kB' 'Shmem: 10464 kB' 'KReclaimable: 66964 kB' 'Slab: 142400 kB' 'SReclaimable: 66964 kB' 'SUnreclaim: 75436 kB' 'KernelStack: 6148 kB' 'PageTables: 3600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 
17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.188 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.190 nr_hugepages=1024 00:03:46.190 resv_hugepages=0 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.190 surplus_hugepages=0 00:03:46.190 anon_hugepages=0 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7971916 kB' 'MemAvailable: 9485120 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 489132 kB' 'Inactive: 1354476 kB' 'Active(anon): 126948 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118132 kB' 'Mapped: 47984 kB' 'Shmem: 10464 kB' 'KReclaimable: 66964 kB' 'Slab: 142316 kB' 'SReclaimable: 66964 kB' 'SUnreclaim: 75352 kB' 'KernelStack: 6192 kB' 'PageTables: 3636 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 
17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.190 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 
17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.191 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- 
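The block of "# continue" trace lines above is a single lookup: setup/common.sh's get_meminfo splits each /proc/meminfo line on ': ', skips every field that is not the requested key, and prints the value once HugePages_Total is reached (1024 here). A minimal, self-contained sketch of that pattern, condensed from the trace rather than copied from the script (the read-loop wiring is an assumption):

#!/usr/bin/env bash
# Sketch of the lookup traced above: walk /proc/meminfo key by key and print the
# value of the requested field. Condensed from the xtrace of setup/common.sh
# get_meminfo; the loop wiring here is an assumption, not the verbatim script.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key shows up as one "continue" line in the trace.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done </proc/meminfo
    return 1
}

get_meminfo HugePages_Total   # prints 1024 on the VM captured in this log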
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7971916 kB' 'MemUsed: 4270060 kB' 'SwapCached: 0 kB' 'Active: 489088 kB' 'Inactive: 1354476 kB' 'Active(anon): 126904 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1727124 kB' 'Mapped: 47984 kB' 'AnonPages: 118072 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66964 kB' 'Slab: 142312 kB' 'SReclaimable: 66964 kB' 'SUnreclaim: 75348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 
17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.192 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.193 node0=1024 expecting 1024 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.193 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.765 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.765 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.765 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.765 17:50:53 
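At this point the check moves to the per-node view: setup/hugepages.sh has already confirmed (( 1024 == nr_hugepages + surp + resv )), and get_meminfo is re-run with node=0, so the counters come from /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that is stripped before the same key/value parse. That yields the "node0=1024 expecting 1024" line above, after which scripts/setup.sh is re-run with NRHUGE=512 and reports that 1024 hugepages are already allocated, consistent with the no_shrink_alloc case. A sketch of the per-node lookup, assuming extglob for the prefix strip (the helper name and the summary echo are illustrative, not from the script):

#!/usr/bin/env bash
shopt -s extglob
# Sketch of the per-node lookup traced above: node0's counters live in
# /sys/devices/system/node/node0/meminfo, where each line is prefixed with
# "Node 0 "; strip that prefix and the generic key/value parse works unchanged.
get_node_meminfo() {
    local get=$1 node=$2 var val _
    local -a mem
    mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node <n> " prefix
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

total=$(get_node_meminfo HugePages_Total 0)   # 1024 in this run
surp=$(get_node_meminfo HugePages_Surp 0)     # 0 in this run
echo "node0=$total expecting 1024 (surplus: $surp)"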
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.765 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7969404 kB' 'MemAvailable: 9482604 kB' 'Buffers: 2436 kB' 'Cached: 1724684 kB' 'SwapCached: 0 kB' 'Active: 489900 kB' 'Inactive: 1354472 kB' 'Active(anon): 127716 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118900 kB' 'Mapped: 48108 kB' 'Shmem: 10464 kB' 'KReclaimable: 66964 kB' 'Slab: 142260 kB' 'SReclaimable: 66964 kB' 'SUnreclaim: 75296 kB' 'KernelStack: 6244 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 
17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.766 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7969152 kB' 'MemAvailable: 9482356 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 489264 kB' 'Inactive: 1354476 kB' 'Active(anon): 127080 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 
'Writeback: 0 kB' 'AnonPages: 118192 kB' 'Mapped: 47976 kB' 'Shmem: 10464 kB' 'KReclaimable: 66964 kB' 'Slab: 142280 kB' 'SReclaimable: 66964 kB' 'SUnreclaim: 75316 kB' 'KernelStack: 6224 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.768 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.769 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7969152 kB' 'MemAvailable: 9482356 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 489040 kB' 'Inactive: 1354476 kB' 'Active(anon): 126856 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 117968 kB' 'Mapped: 47976 kB' 'Shmem: 10464 kB' 'KReclaimable: 66964 kB' 'Slab: 142280 kB' 'SReclaimable: 66964 kB' 'SUnreclaim: 75316 kB' 
'KernelStack: 6208 kB' 'PageTables: 3676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.770 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.772 nr_hugepages=1024 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.772 resv_hugepages=0 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.772 surplus_hugepages=0 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.772 anon_hugepages=0 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7969152 kB' 'MemAvailable: 9482356 kB' 'Buffers: 2436 kB' 'Cached: 1724688 kB' 'SwapCached: 0 kB' 'Active: 489300 kB' 'Inactive: 1354476 kB' 'Active(anon): 127116 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118228 kB' 'Mapped: 47976 kB' 'Shmem: 10464 kB' 'KReclaimable: 66964 kB' 
'Slab: 142280 kB' 'SReclaimable: 66964 kB' 'SUnreclaim: 75316 kB' 'KernelStack: 6208 kB' 'PageTables: 3676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.772 17:50:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
[the same xtrace cycle -- setup/common.sh@31 IFS=': '; read -r var val _; setup/common.sh@32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]; continue -- repeats at 00:03:46.772-00:03:46.774 17:50:53 for every remaining /proc/meminfo key: Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted -- none of which match]
00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:46.774 17:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7969152 kB' 'MemUsed: 4272824 kB' 'SwapCached: 0 kB' 'Active: 489232 kB' 'Inactive: 1354476 kB' 'Active(anon): 127048 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1727124 kB' 'Mapped: 47976 kB' 'AnonPages: 118204 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66964 kB' 'Slab: 142276 kB' 'SReclaimable: 66964 kB' 'SUnreclaim: 75312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.774 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 17:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[the same xtrace cycle -- setup/common.sh@31 IFS=': '; read -r var val _; setup/common.sh@32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]; continue -- repeats at 00:03:46.774-00:03:46.775 17:50:53 for every remaining node0 meminfo key: MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted -- none of which match]
00:03:46.775 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:46.775 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.775 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.776 node0=1024 expecting 1024 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.776 00:03:46.776 real 0m1.137s 00:03:46.776 user 0m0.548s 00:03:46.776 sys 0m0.666s 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.776 17:50:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.776 ************************************ 00:03:46.776 END TEST no_shrink_alloc 00:03:46.776 ************************************ 00:03:46.776 17:50:53 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:46.776 17:50:53 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:46.776 17:50:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.776 17:50:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.776 17:50:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.776 17:50:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.776 17:50:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.776 17:50:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:46.776 17:50:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:46.776 00:03:46.776 real 0m5.102s 00:03:46.776 user 0m2.267s 00:03:46.776 sys 0m3.019s 00:03:46.776 17:50:53 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.776 17:50:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.776 ************************************ 00:03:46.776 END TEST hugepages 00:03:46.776 ************************************ 00:03:47.035 17:50:53 setup.sh -- 
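
The hugepages checks traced above all funnel through the same meminfo-scanning helper in setup/common.sh: set IFS=': ', read key/value pairs, and echo the value once the requested key is seen. A minimal sketch of that pattern follows; names are illustrative, not the exact SPDK implementation, and the per-node "Node <n>" prefix handling is simplified with sed.

get_meminfo_sketch() {
    local want=$1 node=${2:-}       # key to look up, plus an optional NUMA node number
    local mem_f=/proc/meminfo
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node meminfo prefixes each line with "Node <n> "; stripping it lets the
    # same "key: value" parse work for both files.
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Usage, matching the values seen in the trace above:
#   get_meminfo_sketch HugePages_Total      # -> 1024
#   get_meminfo_sketch HugePages_Surp 0     # -> 0
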
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:47.035 17:50:53 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.035 17:50:53 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.035 17:50:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:47.035 ************************************ 00:03:47.035 START TEST driver 00:03:47.035 ************************************ 00:03:47.036 17:50:53 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:47.036 * Looking for test storage... 00:03:47.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:47.036 17:50:53 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:47.036 17:50:53 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.036 17:50:53 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.601 17:50:54 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:47.601 17:50:54 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.601 17:50:54 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.601 17:50:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:47.601 ************************************ 00:03:47.601 START TEST guess_driver 00:03:47.601 ************************************ 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:47.601 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:47.601 Looking for driver=uio_pci_generic 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.601 17:50:54 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.536 17:50:55 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.472 00:03:49.472 real 0m1.622s 00:03:49.472 user 0m0.586s 00:03:49.472 sys 0m1.091s 00:03:49.472 17:50:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.472 ************************************ 00:03:49.472 17:50:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.472 END TEST guess_driver 00:03:49.472 ************************************ 00:03:49.472 00:03:49.472 real 0m2.446s 00:03:49.472 user 0m0.857s 00:03:49.472 sys 0m1.720s 00:03:49.472 17:50:56 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.472 17:50:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.472 ************************************ 00:03:49.472 END TEST driver 00:03:49.472 ************************************ 00:03:49.472 17:50:56 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:49.472 17:50:56 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.472 17:50:56 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.472 17:50:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:49.472 ************************************ 00:03:49.472 START TEST devices 00:03:49.472 
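
The guess_driver trace above boils down to: prefer a VFIO setup when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if modprobe can resolve its module chain. A simplified sketch under those assumptions; the driver labels are illustrative and this is not the exact setup/driver.sh logic.

pick_driver_sketch() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=''
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if [[ -e ${groups[0]} || $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic &> /dev/null; then
        # uio_pci_generic resolves to real .ko modules, as in the trace above
        echo uio_pci_generic
    else
        echo 'No valid driver found'
        return 1
    fi
}
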
************************************ 00:03:49.472 17:50:56 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:49.472 * Looking for test storage... 00:03:49.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:49.472 17:50:56 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:49.472 17:50:56 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:49.472 17:50:56 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.472 17:50:56 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:50.408 17:50:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
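
get_zoned_devs, as traced above, only needs to look at each NVMe block device's queue/zoned attribute; anything other than "none" is excluded from the test pool. A hedged sketch of that filter (the real helper also records the controller's PCI address, which is omitted here):

declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue
    if [[ $(< "$nvme/queue/zoned") != none ]]; then
        zoned_devs[${nvme##*/}]=1      # e.g. zoned_devs[nvme0n1]=1
    fi
done
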
00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:50.408 17:50:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:50.408 17:50:57 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:50.408 No valid GPT data, bailing 00:03:50.408 17:50:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.408 17:50:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.408 17:50:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:50.408 17:50:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:50.408 17:50:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:50.408 17:50:57 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.408 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:50.408 17:50:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:50.408 17:50:57 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:50.408 No valid GPT data, bailing 00:03:50.408 17:50:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:50.408 17:50:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.409 17:50:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.409 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:50.409 17:50:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:50.409 17:50:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:50.409 17:50:57 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.409 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.409 17:50:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.409 17:50:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.409 17:50:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
00:03:50.409 17:50:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:50.409 17:50:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.409 17:50:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:50.409 17:50:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.409 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:50.409 17:50:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:50.409 17:50:57 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:50.409 No valid GPT data, bailing 00:03:50.409 17:50:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:50.668 17:50:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.668 17:50:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:50.668 17:50:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:50.668 17:50:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:50.668 17:50:57 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:50.668 17:50:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:50.668 17:50:57 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:50.668 No valid GPT data, bailing 00:03:50.668 17:50:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:50.668 17:50:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.668 17:50:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:50.668 17:50:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:50.668 17:50:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:50.668 17:50:57 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:50.668 17:50:57 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:50.668 17:50:57 setup.sh.devices 
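
Before a disk is declared usable in the loop above, it has to pass three filters: the spdk-gpt.py probe reports no SPDK GPT data, blkid finds no partition table, and the device clears the 3 GiB minimum size. A rough sketch of the last two checks; the spdk-gpt.py step and the controller-path exclusion are omitted.

min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as declared in the trace
usable_blocks=()
for block in /sys/block/nvme*; do
    dev=${block##*/}
    # A disk that already carries a partition table is left alone.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
    # sysfs reports the size in 512-byte sectors.
    size=$(( $(< "$block/size") * 512 ))
    (( size >= min_disk_size )) && usable_blocks+=("$dev")
done
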
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:50.668 17:50:57 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.668 17:50:57 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.668 17:50:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:50.668 ************************************ 00:03:50.668 START TEST nvme_mount 00:03:50.668 ************************************ 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:50.668 17:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:51.604 Creating new GPT entries in memory. 00:03:51.604 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.604 other utilities. 00:03:51.604 17:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.604 17:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.604 17:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.604 17:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.604 17:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:52.980 Creating new GPT entries in memory. 00:03:52.980 The operation has completed successfully. 
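
The partition step traced above zaps the disk, creates a single partition while holding an flock on the device node, and then waits for the kernel to publish the new partition. A condensed sketch; udevadm settle stands in for SPDK's sync_dev_uevents.sh helper, and the device name is the one from the trace.

disk=/dev/nvme0n1                                  # illustrative; taken from the trace
sgdisk "$disk" --zap-all                           # destroy any existing GPT/MBR data
flock "$disk" sgdisk "$disk" --new=1:2048:264191   # partition 1, sectors 2048-264191
udevadm settle                                     # wait for /dev/nvme0n1p1 to appear
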
00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58880 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.980 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.238 17:50:59 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.238 17:50:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:53.238 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:53.238 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:53.495 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:53.495 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:53.495 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:53.495 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:53.495 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:53.495 17:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:53.495 17:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.495 17:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:53.495 17:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:53.495 17:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.495 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.798 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.056 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.056 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.056 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.056 17:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.315 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:54.316 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.316 17:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.316 17:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.574 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.574 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:54.574 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.574 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.574 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.574 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.833 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.833 00:03:54.833 real 0m4.303s 00:03:54.833 user 0m0.796s 00:03:54.833 sys 0m1.291s 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.833 ************************************ 00:03:54.833 END TEST nvme_mount 00:03:54.833 17:51:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:54.833 
************************************ 00:03:55.092 17:51:01 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:55.092 17:51:01 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.092 17:51:01 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.092 17:51:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:55.092 ************************************ 00:03:55.092 START TEST dm_mount 00:03:55.092 ************************************ 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:55.092 17:51:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:56.028 Creating new GPT entries in memory. 00:03:56.028 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:56.028 other utilities. 00:03:56.028 17:51:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:56.028 17:51:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.028 17:51:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:56.028 17:51:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:56.028 17:51:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:56.961 Creating new GPT entries in memory. 00:03:56.961 The operation has completed successfully. 
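The dm_mount test above wipes the disk with sgdisk and then carves out two small GPT partitions before layering a device-mapper target on them. A minimal standalone sketch of that partitioning sequence, assuming /dev/nvme0n1 is a scratch disk whose contents may be destroyed, and using udevadm settle in place of the repo's sync_dev_uevents.sh helper:

  #!/usr/bin/env bash
  # Sketch of the partitioning steps seen in the log; not the test script itself.
  disk=/dev/nvme0n1                        # scratch disk (assumption)
  flock "$disk" sgdisk "$disk" --zap-all   # drop any existing partition table
  flock "$disk" sgdisk "$disk" --new=1:2048:264191     # first partition (262144 sectors)
  flock "$disk" sgdisk "$disk" --new=2:264192:526335   # second partition (262144 sectors)
  udevadm settle                           # wait for the new partition uevents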
00:03:56.961 17:51:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:56.961 17:51:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.961 17:51:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:56.961 17:51:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:56.961 17:51:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:58.337 The operation has completed successfully. 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59322 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:58.337 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:58.338 17:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.338 17:51:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:58.338 17:51:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.338 17:51:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.338 17:51:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:58.338 17:51:04 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.338 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:58.600 17:51:05 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.600 17:51:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.858 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.858 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:58.858 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:58.858 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.858 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.858 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.117 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.117 17:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.117 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.117 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.117 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.117 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:59.117 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:59.117 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:59.117 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.117 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:59.117 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:59.375 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.375 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:59.375 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.375 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:59.375 17:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:03:59.375 00:03:59.375 real 0m4.309s 00:03:59.375 user 0m0.474s 00:03:59.375 sys 0m0.801s 00:03:59.375 17:51:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.375 17:51:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:59.375 ************************************ 00:03:59.375 END TEST dm_mount 00:03:59.375 ************************************ 00:03:59.375 17:51:06 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:59.375 17:51:06 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:59.375 17:51:06 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.375 17:51:06 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.375 17:51:06 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:59.375 17:51:06 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.375 17:51:06 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.636 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:59.636 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:59.636 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:59.636 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:59.636 17:51:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:59.636 17:51:06 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.636 17:51:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:59.636 17:51:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.636 17:51:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:59.636 17:51:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.636 17:51:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:59.636 00:03:59.636 real 0m10.253s 00:03:59.636 user 0m1.916s 00:03:59.636 sys 0m2.816s 00:03:59.636 ************************************ 00:03:59.636 END TEST devices 00:03:59.636 ************************************ 00:03:59.636 17:51:06 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.636 17:51:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:59.636 00:03:59.636 real 0m23.368s 00:03:59.636 user 0m7.366s 00:03:59.636 sys 0m10.835s 00:03:59.636 17:51:06 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.636 17:51:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.636 ************************************ 00:03:59.636 END TEST setup.sh 00:03:59.636 ************************************ 00:03:59.636 17:51:06 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:00.570 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.570 Hugepages 00:04:00.570 node hugesize free / total 00:04:00.570 node0 1048576kB 0 / 0 00:04:00.570 node0 2048kB 2048 / 2048 00:04:00.570 00:04:00.570 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:00.570 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:00.570 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:00.828 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:04:00.828 17:51:07 -- spdk/autotest.sh@130 -- # uname -s 00:04:00.828 17:51:07 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:00.828 17:51:07 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:00.828 17:51:07 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.394 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.651 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.651 17:51:08 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:02.667 17:51:09 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:02.667 17:51:09 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:02.667 17:51:09 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:02.667 17:51:09 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:02.667 17:51:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:02.667 17:51:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:02.667 17:51:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.667 17:51:09 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:02.667 17:51:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:02.667 17:51:09 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:02.667 17:51:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:02.667 17:51:09 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.182 Waiting for block devices as requested 00:04:03.182 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:03.182 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:03.440 17:51:10 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:03.440 17:51:10 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:03.440 17:51:10 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:03.440 17:51:10 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:03.440 17:51:10 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.440 17:51:10 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:03.440 17:51:10 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.440 17:51:10 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:03.440 17:51:10 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:03.440 17:51:10 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:03.440 17:51:10 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:03.440 17:51:10 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:03.440 17:51:10 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:03.440 17:51:10 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:03.440 17:51:10 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:03.440 17:51:10 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:03.440 17:51:10 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
00:04:03.440 17:51:10 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:03.440 17:51:10 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:03.440 17:51:10 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:03.440 17:51:10 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:03.440 17:51:10 -- common/autotest_common.sh@1557 -- # continue 00:04:03.440 17:51:10 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:03.440 17:51:10 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:03.440 17:51:10 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:03.440 17:51:10 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:03.440 17:51:10 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.440 17:51:10 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:03.440 17:51:10 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.440 17:51:10 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:03.440 17:51:10 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:03.440 17:51:10 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:03.440 17:51:10 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:03.440 17:51:10 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:03.440 17:51:10 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:03.440 17:51:10 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:03.440 17:51:10 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:03.440 17:51:10 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:03.440 17:51:10 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:03.440 17:51:10 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:03.440 17:51:10 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:03.440 17:51:10 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:03.440 17:51:10 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:03.441 17:51:10 -- common/autotest_common.sh@1557 -- # continue 00:04:03.441 17:51:10 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:03.441 17:51:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.441 17:51:10 -- common/autotest_common.sh@10 -- # set +x 00:04:03.441 17:51:10 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:03.441 17:51:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.441 17:51:10 -- common/autotest_common.sh@10 -- # set +x 00:04:03.441 17:51:10 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.332 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.332 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.332 17:51:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:04.332 17:51:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.332 17:51:11 -- common/autotest_common.sh@10 -- # set +x 00:04:04.332 17:51:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:04.332 17:51:11 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:04.332 17:51:11 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:04.332 17:51:11 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:04:04.332 17:51:11 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:04.332 17:51:11 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:04.332 17:51:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:04.332 17:51:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:04.332 17:51:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.332 17:51:11 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.332 17:51:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:04.590 17:51:11 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:04.590 17:51:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:04.590 17:51:11 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:04.590 17:51:11 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:04.590 17:51:11 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:04.590 17:51:11 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.590 17:51:11 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:04.590 17:51:11 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:04.590 17:51:11 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:04.590 17:51:11 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.590 17:51:11 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:04.590 17:51:11 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:04.590 17:51:11 -- common/autotest_common.sh@1593 -- # return 0 00:04:04.590 17:51:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:04.590 17:51:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:04.590 17:51:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:04.590 17:51:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:04.590 17:51:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:04.590 17:51:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.590 17:51:11 -- common/autotest_common.sh@10 -- # set +x 00:04:04.590 17:51:11 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:04.590 17:51:11 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.590 17:51:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.590 17:51:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.590 17:51:11 -- common/autotest_common.sh@10 -- # set +x 00:04:04.590 ************************************ 00:04:04.590 START TEST env 00:04:04.590 ************************************ 00:04:04.590 17:51:11 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.590 * Looking for test storage... 
00:04:04.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:04.590 17:51:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.590 17:51:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.590 17:51:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.590 17:51:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.590 ************************************ 00:04:04.590 START TEST env_memory 00:04:04.590 ************************************ 00:04:04.590 17:51:11 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.590 00:04:04.590 00:04:04.590 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.590 http://cunit.sourceforge.net/ 00:04:04.590 00:04:04.590 00:04:04.590 Suite: memory 00:04:04.590 Test: alloc and free memory map ...[2024-07-24 17:51:11.555006] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:04.847 passed 00:04:04.847 Test: mem map translation ...[2024-07-24 17:51:11.590066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:04.847 [2024-07-24 17:51:11.590136] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:04.847 [2024-07-24 17:51:11.590199] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:04.847 [2024-07-24 17:51:11.590212] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.847 passed 00:04:04.847 Test: mem map registration ...[2024-07-24 17:51:11.656728] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:04.847 [2024-07-24 17:51:11.656782] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:04.847 passed 00:04:04.847 Test: mem map adjacent registrations ...passed 00:04:04.847 00:04:04.847 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.847 suites 1 1 n/a 0 0 00:04:04.847 tests 4 4 4 0 0 00:04:04.847 asserts 152 152 152 0 n/a 00:04:04.847 00:04:04.847 Elapsed time = 0.217 seconds 00:04:04.847 00:04:04.847 real 0m0.240s 00:04:04.847 user 0m0.215s 00:04:04.847 sys 0m0.016s 00:04:04.847 17:51:11 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.847 17:51:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:04.847 ************************************ 00:04:04.847 END TEST env_memory 00:04:04.847 ************************************ 00:04:04.847 17:51:11 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.847 17:51:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.847 17:51:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.847 17:51:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.847 ************************************ 00:04:04.847 START TEST env_vtophys 00:04:04.847 ************************************ 00:04:04.847 17:51:11 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.847 EAL: lib.eal log level changed from notice to debug 00:04:04.847 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.847 EAL: Detected lcore 1 as core 0 on socket 0 00:04:04.847 EAL: Detected lcore 2 as core 0 on socket 0 00:04:04.847 EAL: Detected lcore 3 as core 0 on socket 0 00:04:04.847 EAL: Detected lcore 4 as core 0 on socket 0 00:04:04.848 EAL: Detected lcore 5 as core 0 on socket 0 00:04:04.848 EAL: Detected lcore 6 as core 0 on socket 0 00:04:04.848 EAL: Detected lcore 7 as core 0 on socket 0 00:04:04.848 EAL: Detected lcore 8 as core 0 on socket 0 00:04:04.848 EAL: Detected lcore 9 as core 0 on socket 0 00:04:05.111 EAL: Maximum logical cores by configuration: 128 00:04:05.111 EAL: Detected CPU lcores: 10 00:04:05.111 EAL: Detected NUMA nodes: 1 00:04:05.111 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:05.111 EAL: Detected shared linkage of DPDK 00:04:05.111 EAL: No shared files mode enabled, IPC will be disabled 00:04:05.111 EAL: Selected IOVA mode 'PA' 00:04:05.111 EAL: Probing VFIO support... 00:04:05.111 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:05.111 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:05.111 EAL: Ask a virtual area of 0x2e000 bytes 00:04:05.111 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:05.111 EAL: Setting up physically contiguous memory... 00:04:05.111 EAL: Setting maximum number of open files to 524288 00:04:05.111 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:05.111 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:05.111 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.111 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:05.111 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.111 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.111 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:05.111 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:05.111 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.111 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:05.111 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.111 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.111 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:05.111 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:05.111 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.111 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:05.111 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.111 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.111 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:05.111 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:05.112 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.112 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:05.112 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.112 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.112 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:05.112 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:05.112 EAL: Hugepages will be freed exactly as allocated. 
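The EAL banner above shows the env tests running on 2 MB hugepages from a single NUMA node (node0, 2048 pages, per the earlier setup.sh status output). A hedged sketch of checking and reserving those pages before running the suite follows; the HUGEMEM handling by scripts/setup.sh and the exact page count are assumptions taken from this run, not a prescribed configuration:

  #!/usr/bin/env bash
  # Sketch: verify and reserve 2 MB hugepages before the env tests (assumptions noted above).
  grep Huge /proc/meminfo                  # current hugepage accounting
  echo 2048 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  # Alternatively, let the SPDK helper reserve memory and bind devices in one step
  # (HUGEMEM is assumed to be interpreted in megabytes by scripts/setup.sh).
  sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh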
00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: TSC frequency is ~2100000 KHz 00:04:05.112 EAL: Main lcore 0 is ready (tid=7fbdf126aa00;cpuset=[0]) 00:04:05.112 EAL: Trying to obtain current memory policy. 00:04:05.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.112 EAL: Restoring previous memory policy: 0 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was expanded by 2MB 00:04:05.112 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:05.112 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:05.112 EAL: Mem event callback 'spdk:(nil)' registered 00:04:05.112 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:05.112 00:04:05.112 00:04:05.112 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.112 http://cunit.sourceforge.net/ 00:04:05.112 00:04:05.112 00:04:05.112 Suite: components_suite 00:04:05.112 Test: vtophys_malloc_test ...passed 00:04:05.112 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:05.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.112 EAL: Restoring previous memory policy: 4 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was expanded by 4MB 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was shrunk by 4MB 00:04:05.112 EAL: Trying to obtain current memory policy. 00:04:05.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.112 EAL: Restoring previous memory policy: 4 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was expanded by 6MB 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was shrunk by 6MB 00:04:05.112 EAL: Trying to obtain current memory policy. 00:04:05.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.112 EAL: Restoring previous memory policy: 4 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was expanded by 10MB 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was shrunk by 10MB 00:04:05.112 EAL: Trying to obtain current memory policy. 
00:04:05.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.112 EAL: Restoring previous memory policy: 4 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was expanded by 18MB 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was shrunk by 18MB 00:04:05.112 EAL: Trying to obtain current memory policy. 00:04:05.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.112 EAL: Restoring previous memory policy: 4 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was expanded by 34MB 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was shrunk by 34MB 00:04:05.112 EAL: Trying to obtain current memory policy. 00:04:05.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.112 EAL: Restoring previous memory policy: 4 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was expanded by 66MB 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was shrunk by 66MB 00:04:05.112 EAL: Trying to obtain current memory policy. 00:04:05.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.112 EAL: Restoring previous memory policy: 4 00:04:05.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.112 EAL: request: mp_malloc_sync 00:04:05.112 EAL: No shared files mode enabled, IPC is disabled 00:04:05.112 EAL: Heap on socket 0 was expanded by 130MB 00:04:05.377 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.377 EAL: request: mp_malloc_sync 00:04:05.377 EAL: No shared files mode enabled, IPC is disabled 00:04:05.377 EAL: Heap on socket 0 was shrunk by 130MB 00:04:05.377 EAL: Trying to obtain current memory policy. 00:04:05.377 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.377 EAL: Restoring previous memory policy: 4 00:04:05.377 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.377 EAL: request: mp_malloc_sync 00:04:05.377 EAL: No shared files mode enabled, IPC is disabled 00:04:05.377 EAL: Heap on socket 0 was expanded by 258MB 00:04:05.377 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.377 EAL: request: mp_malloc_sync 00:04:05.377 EAL: No shared files mode enabled, IPC is disabled 00:04:05.377 EAL: Heap on socket 0 was shrunk by 258MB 00:04:05.377 EAL: Trying to obtain current memory policy. 
00:04:05.377 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.635 EAL: Restoring previous memory policy: 4 00:04:05.635 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.635 EAL: request: mp_malloc_sync 00:04:05.635 EAL: No shared files mode enabled, IPC is disabled 00:04:05.635 EAL: Heap on socket 0 was expanded by 514MB 00:04:05.635 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.635 EAL: request: mp_malloc_sync 00:04:05.635 EAL: No shared files mode enabled, IPC is disabled 00:04:05.635 EAL: Heap on socket 0 was shrunk by 514MB 00:04:05.635 EAL: Trying to obtain current memory policy. 00:04:05.635 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.893 EAL: Restoring previous memory policy: 4 00:04:05.893 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.893 EAL: request: mp_malloc_sync 00:04:05.893 EAL: No shared files mode enabled, IPC is disabled 00:04:05.893 EAL: Heap on socket 0 was expanded by 1026MB 00:04:06.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.150 passed 00:04:06.150 00:04:06.150 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.150 suites 1 1 n/a 0 0 00:04:06.150 tests 2 2 2 0 0 00:04:06.150 asserts 5344 5344 5344 0 n/a 00:04:06.150 00:04:06.150 Elapsed time = 1.051 seconds 00:04:06.150 EAL: request: mp_malloc_sync 00:04:06.150 EAL: No shared files mode enabled, IPC is disabled 00:04:06.150 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:06.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.150 EAL: request: mp_malloc_sync 00:04:06.150 EAL: No shared files mode enabled, IPC is disabled 00:04:06.150 EAL: Heap on socket 0 was shrunk by 2MB 00:04:06.150 EAL: No shared files mode enabled, IPC is disabled 00:04:06.150 EAL: No shared files mode enabled, IPC is disabled 00:04:06.150 EAL: No shared files mode enabled, IPC is disabled 00:04:06.150 ************************************ 00:04:06.150 END TEST env_vtophys 00:04:06.150 ************************************ 00:04:06.150 00:04:06.150 real 0m1.256s 00:04:06.151 user 0m0.659s 00:04:06.151 sys 0m0.463s 00:04:06.151 17:51:13 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.151 17:51:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:06.151 17:51:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:06.151 17:51:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.151 17:51:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.151 17:51:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.151 ************************************ 00:04:06.151 START TEST env_pci 00:04:06.151 ************************************ 00:04:06.151 17:51:13 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:06.409 00:04:06.409 00:04:06.409 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.409 http://cunit.sourceforge.net/ 00:04:06.409 00:04:06.409 00:04:06.409 Suite: pci 00:04:06.409 Test: pci_hook ...[2024-07-24 17:51:13.131178] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60522 has claimed it 00:04:06.409 passed 00:04:06.409 00:04:06.409 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.409 suites 1 1 n/a 0 0 00:04:06.409 tests 1 1 1 0 0 00:04:06.409 asserts 25 25 25 0 n/a 00:04:06.409 00:04:06.409 Elapsed time = 0.003 seconds 00:04:06.409 EAL: Cannot find 
device (10000:00:01.0) 00:04:06.409 EAL: Failed to attach device on primary process 00:04:06.409 00:04:06.409 real 0m0.024s 00:04:06.409 user 0m0.009s 00:04:06.409 sys 0m0.014s 00:04:06.409 17:51:13 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.409 17:51:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:06.409 ************************************ 00:04:06.409 END TEST env_pci 00:04:06.409 ************************************ 00:04:06.410 17:51:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:06.410 17:51:13 env -- env/env.sh@15 -- # uname 00:04:06.410 17:51:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:06.410 17:51:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:06.410 17:51:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.410 17:51:13 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:06.410 17:51:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.410 17:51:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.410 ************************************ 00:04:06.410 START TEST env_dpdk_post_init 00:04:06.410 ************************************ 00:04:06.410 17:51:13 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.410 EAL: Detected CPU lcores: 10 00:04:06.410 EAL: Detected NUMA nodes: 1 00:04:06.410 EAL: Detected shared linkage of DPDK 00:04:06.410 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.410 EAL: Selected IOVA mode 'PA' 00:04:06.410 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.667 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:06.667 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:06.668 Starting DPDK initialization... 00:04:06.668 Starting SPDK post initialization... 00:04:06.668 SPDK NVMe probe 00:04:06.668 Attaching to 0000:00:10.0 00:04:06.668 Attaching to 0000:00:11.0 00:04:06.668 Attached to 0000:00:10.0 00:04:06.668 Attached to 0000:00:11.0 00:04:06.668 Cleaning up... 
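The DPDK post-init test attaches to both emulated NVMe controllers using a single-core mask and a fixed base virtual address. A sketch of re-running just that binary by hand with the same arguments the harness passed above, assuming the devices are still bound to a userspace driver by scripts/setup.sh:

  #!/usr/bin/env bash
  # Sketch: repeat the env_dpdk_post_init run outside the autotest harness.
  cd /home/vagrant/spdk_repo/spdk
  sudo ./scripts/setup.sh   # bind 0000:00:10.0 / 0000:00:11.0 to a userspace driver
  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000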
00:04:06.668 00:04:06.668 real 0m0.199s 00:04:06.668 user 0m0.046s 00:04:06.668 sys 0m0.053s 00:04:06.668 17:51:13 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.668 ************************************ 00:04:06.668 END TEST env_dpdk_post_init 00:04:06.668 ************************************ 00:04:06.668 17:51:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.668 17:51:13 env -- env/env.sh@26 -- # uname 00:04:06.668 17:51:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:06.668 17:51:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.668 17:51:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.668 17:51:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.668 17:51:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.668 ************************************ 00:04:06.668 START TEST env_mem_callbacks 00:04:06.668 ************************************ 00:04:06.668 17:51:13 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.668 EAL: Detected CPU lcores: 10 00:04:06.668 EAL: Detected NUMA nodes: 1 00:04:06.668 EAL: Detected shared linkage of DPDK 00:04:06.668 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.668 EAL: Selected IOVA mode 'PA' 00:04:06.668 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.668 00:04:06.668 00:04:06.668 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.668 http://cunit.sourceforge.net/ 00:04:06.668 00:04:06.668 00:04:06.668 Suite: memory 00:04:06.668 Test: test ... 00:04:06.668 register 0x200000200000 2097152 00:04:06.668 malloc 3145728 00:04:06.668 register 0x200000400000 4194304 00:04:06.668 buf 0x200000500000 len 3145728 PASSED 00:04:06.668 malloc 64 00:04:06.668 buf 0x2000004fff40 len 64 PASSED 00:04:06.668 malloc 4194304 00:04:06.668 register 0x200000800000 6291456 00:04:06.668 buf 0x200000a00000 len 4194304 PASSED 00:04:06.668 free 0x200000500000 3145728 00:04:06.668 free 0x2000004fff40 64 00:04:06.668 unregister 0x200000400000 4194304 PASSED 00:04:06.668 free 0x200000a00000 4194304 00:04:06.668 unregister 0x200000800000 6291456 PASSED 00:04:06.668 malloc 8388608 00:04:06.668 register 0x200000400000 10485760 00:04:06.668 buf 0x200000600000 len 8388608 PASSED 00:04:06.668 free 0x200000600000 8388608 00:04:06.668 unregister 0x200000400000 10485760 PASSED 00:04:06.668 passed 00:04:06.668 00:04:06.668 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.668 suites 1 1 n/a 0 0 00:04:06.668 tests 1 1 1 0 0 00:04:06.668 asserts 15 15 15 0 n/a 00:04:06.668 00:04:06.668 Elapsed time = 0.007 seconds 00:04:06.668 00:04:06.668 real 0m0.151s 00:04:06.668 user 0m0.017s 00:04:06.668 sys 0m0.031s 00:04:06.668 17:51:13 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.668 ************************************ 00:04:06.668 END TEST env_mem_callbacks 00:04:06.668 ************************************ 00:04:06.668 17:51:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:06.925 00:04:06.925 real 0m2.255s 00:04:06.925 user 0m1.079s 00:04:06.925 sys 0m0.828s 00:04:06.925 17:51:13 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.925 17:51:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.925 ************************************ 00:04:06.925 END TEST env 00:04:06.925 
************************************ 00:04:06.925 17:51:13 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:06.925 17:51:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.925 17:51:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.925 17:51:13 -- common/autotest_common.sh@10 -- # set +x 00:04:06.925 ************************************ 00:04:06.925 START TEST rpc 00:04:06.925 ************************************ 00:04:06.925 17:51:13 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:06.925 * Looking for test storage... 00:04:06.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.925 17:51:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60631 00:04:06.925 17:51:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.925 17:51:13 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:06.925 17:51:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60631 00:04:06.925 17:51:13 rpc -- common/autotest_common.sh@831 -- # '[' -z 60631 ']' 00:04:06.925 17:51:13 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.925 17:51:13 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:06.925 17:51:13 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.925 17:51:13 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:06.925 17:51:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.925 [2024-07-24 17:51:13.884173] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:04:06.925 [2024-07-24 17:51:13.884547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60631 ] 00:04:07.183 [2024-07-24 17:51:14.028365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.440 [2024-07-24 17:51:14.160238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:07.440 [2024-07-24 17:51:14.160323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60631' to capture a snapshot of events at runtime. 00:04:07.440 [2024-07-24 17:51:14.160338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:07.440 [2024-07-24 17:51:14.160352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:07.440 [2024-07-24 17:51:14.160363] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60631 for offline analysis/debug. 
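At this point the rpc suite has a single spdk_tgt running with the 'bdev' tracepoint group enabled (-e bdev) and listening on /var/tmp/spdk.sock; every rpc_* case below drives that socket through the rpc_cmd wrapper. A minimal manual sketch against such a target, assuming the stock scripts/rpc.py client and the spdk_trace reader from a normal SPDK build tree (these paths are illustrative, not taken from this run):

  # confirm the JSON-RPC listener is up on the default socket
  ./scripts/rpc.py spdk_get_version
  # same trace metadata that rpc_trace_cmd_test inspects further down
  ./scripts/rpc.py trace_get_info
  # snapshot the 'bdev' tracepoints, as suggested by the notice above
  ./build/bin/spdk_trace -s spdk_tgt -p 60631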
00:04:07.440 [2024-07-24 17:51:14.160411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.005 17:51:14 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:08.005 17:51:14 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:08.005 17:51:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.006 17:51:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.006 17:51:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:08.006 17:51:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:08.006 17:51:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.006 17:51:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.006 17:51:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.006 ************************************ 00:04:08.006 START TEST rpc_integrity 00:04:08.006 ************************************ 00:04:08.006 17:51:14 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:08.006 17:51:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.006 17:51:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.006 17:51:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.006 17:51:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.006 17:51:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.006 17:51:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:08.006 17:51:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.006 17:51:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.006 17:51:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.006 17:51:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.006 17:51:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.006 17:51:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:08.006 17:51:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.006 17:51:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.006 17:51:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.006 17:51:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.006 17:51:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.006 { 00:04:08.006 "aliases": [ 00:04:08.006 "41e0f1d6-1df2-4da6-805d-9f263cb00acb" 00:04:08.006 ], 00:04:08.006 "assigned_rate_limits": { 00:04:08.006 "r_mbytes_per_sec": 0, 00:04:08.006 "rw_ios_per_sec": 0, 00:04:08.006 "rw_mbytes_per_sec": 0, 00:04:08.006 "w_mbytes_per_sec": 0 00:04:08.006 }, 00:04:08.006 "block_size": 512, 00:04:08.006 "claimed": false, 00:04:08.006 "driver_specific": {}, 00:04:08.006 "memory_domains": [ 00:04:08.006 { 00:04:08.006 "dma_device_id": "system", 00:04:08.006 "dma_device_type": 1 00:04:08.006 }, 00:04:08.006 { 00:04:08.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.006 "dma_device_type": 2 00:04:08.006 } 00:04:08.006 ], 00:04:08.006 "name": "Malloc0", 
00:04:08.006 "num_blocks": 16384, 00:04:08.006 "product_name": "Malloc disk", 00:04:08.006 "supported_io_types": { 00:04:08.006 "abort": true, 00:04:08.006 "compare": false, 00:04:08.006 "compare_and_write": false, 00:04:08.006 "copy": true, 00:04:08.006 "flush": true, 00:04:08.006 "get_zone_info": false, 00:04:08.006 "nvme_admin": false, 00:04:08.006 "nvme_io": false, 00:04:08.006 "nvme_io_md": false, 00:04:08.006 "nvme_iov_md": false, 00:04:08.006 "read": true, 00:04:08.006 "reset": true, 00:04:08.006 "seek_data": false, 00:04:08.006 "seek_hole": false, 00:04:08.006 "unmap": true, 00:04:08.006 "write": true, 00:04:08.006 "write_zeroes": true, 00:04:08.006 "zcopy": true, 00:04:08.006 "zone_append": false, 00:04:08.006 "zone_management": false 00:04:08.006 }, 00:04:08.006 "uuid": "41e0f1d6-1df2-4da6-805d-9f263cb00acb", 00:04:08.006 "zoned": false 00:04:08.006 } 00:04:08.006 ]' 00:04:08.006 17:51:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.264 [2024-07-24 17:51:15.024665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:08.264 [2024-07-24 17:51:15.024712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.264 [2024-07-24 17:51:15.024743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x818ad0 00:04:08.264 [2024-07-24 17:51:15.024752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.264 [2024-07-24 17:51:15.026179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.264 [2024-07-24 17:51:15.026213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.264 Passthru0 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.264 { 00:04:08.264 "aliases": [ 00:04:08.264 "41e0f1d6-1df2-4da6-805d-9f263cb00acb" 00:04:08.264 ], 00:04:08.264 "assigned_rate_limits": { 00:04:08.264 "r_mbytes_per_sec": 0, 00:04:08.264 "rw_ios_per_sec": 0, 00:04:08.264 "rw_mbytes_per_sec": 0, 00:04:08.264 "w_mbytes_per_sec": 0 00:04:08.264 }, 00:04:08.264 "block_size": 512, 00:04:08.264 "claim_type": "exclusive_write", 00:04:08.264 "claimed": true, 00:04:08.264 "driver_specific": {}, 00:04:08.264 "memory_domains": [ 00:04:08.264 { 00:04:08.264 "dma_device_id": "system", 00:04:08.264 "dma_device_type": 1 00:04:08.264 }, 00:04:08.264 { 00:04:08.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.264 "dma_device_type": 2 00:04:08.264 } 00:04:08.264 ], 00:04:08.264 "name": "Malloc0", 00:04:08.264 "num_blocks": 16384, 00:04:08.264 "product_name": "Malloc disk", 00:04:08.264 "supported_io_types": { 00:04:08.264 "abort": true, 00:04:08.264 "compare": false, 00:04:08.264 
"compare_and_write": false, 00:04:08.264 "copy": true, 00:04:08.264 "flush": true, 00:04:08.264 "get_zone_info": false, 00:04:08.264 "nvme_admin": false, 00:04:08.264 "nvme_io": false, 00:04:08.264 "nvme_io_md": false, 00:04:08.264 "nvme_iov_md": false, 00:04:08.264 "read": true, 00:04:08.264 "reset": true, 00:04:08.264 "seek_data": false, 00:04:08.264 "seek_hole": false, 00:04:08.264 "unmap": true, 00:04:08.264 "write": true, 00:04:08.264 "write_zeroes": true, 00:04:08.264 "zcopy": true, 00:04:08.264 "zone_append": false, 00:04:08.264 "zone_management": false 00:04:08.264 }, 00:04:08.264 "uuid": "41e0f1d6-1df2-4da6-805d-9f263cb00acb", 00:04:08.264 "zoned": false 00:04:08.264 }, 00:04:08.264 { 00:04:08.264 "aliases": [ 00:04:08.264 "6fc055c3-9fda-5e93-8a13-4cb6a7c4a92e" 00:04:08.264 ], 00:04:08.264 "assigned_rate_limits": { 00:04:08.264 "r_mbytes_per_sec": 0, 00:04:08.264 "rw_ios_per_sec": 0, 00:04:08.264 "rw_mbytes_per_sec": 0, 00:04:08.264 "w_mbytes_per_sec": 0 00:04:08.264 }, 00:04:08.264 "block_size": 512, 00:04:08.264 "claimed": false, 00:04:08.264 "driver_specific": { 00:04:08.264 "passthru": { 00:04:08.264 "base_bdev_name": "Malloc0", 00:04:08.264 "name": "Passthru0" 00:04:08.264 } 00:04:08.264 }, 00:04:08.264 "memory_domains": [ 00:04:08.264 { 00:04:08.264 "dma_device_id": "system", 00:04:08.264 "dma_device_type": 1 00:04:08.264 }, 00:04:08.264 { 00:04:08.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.264 "dma_device_type": 2 00:04:08.264 } 00:04:08.264 ], 00:04:08.264 "name": "Passthru0", 00:04:08.264 "num_blocks": 16384, 00:04:08.264 "product_name": "passthru", 00:04:08.264 "supported_io_types": { 00:04:08.264 "abort": true, 00:04:08.264 "compare": false, 00:04:08.264 "compare_and_write": false, 00:04:08.264 "copy": true, 00:04:08.264 "flush": true, 00:04:08.264 "get_zone_info": false, 00:04:08.264 "nvme_admin": false, 00:04:08.264 "nvme_io": false, 00:04:08.264 "nvme_io_md": false, 00:04:08.264 "nvme_iov_md": false, 00:04:08.264 "read": true, 00:04:08.264 "reset": true, 00:04:08.264 "seek_data": false, 00:04:08.264 "seek_hole": false, 00:04:08.264 "unmap": true, 00:04:08.264 "write": true, 00:04:08.264 "write_zeroes": true, 00:04:08.264 "zcopy": true, 00:04:08.264 "zone_append": false, 00:04:08.264 "zone_management": false 00:04:08.264 }, 00:04:08.264 "uuid": "6fc055c3-9fda-5e93-8a13-4cb6a7c4a92e", 00:04:08.264 "zoned": false 00:04:08.264 } 00:04:08.264 ]' 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.264 ************************************ 00:04:08.264 END TEST rpc_integrity 00:04:08.264 ************************************ 00:04:08.264 17:51:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.264 00:04:08.264 real 0m0.304s 00:04:08.264 user 0m0.180s 00:04:08.264 sys 0m0.044s 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.264 17:51:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.264 17:51:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:08.264 17:51:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.264 17:51:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.264 17:51:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.264 ************************************ 00:04:08.264 START TEST rpc_plugins 00:04:08.264 ************************************ 00:04:08.264 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:08.265 17:51:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:08.265 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.265 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.527 17:51:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:08.527 17:51:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.527 17:51:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:08.527 { 00:04:08.527 "aliases": [ 00:04:08.527 "dfe250f7-b6e2-4596-8a03-107cfd86a6ef" 00:04:08.527 ], 00:04:08.527 "assigned_rate_limits": { 00:04:08.527 "r_mbytes_per_sec": 0, 00:04:08.527 "rw_ios_per_sec": 0, 00:04:08.527 "rw_mbytes_per_sec": 0, 00:04:08.527 "w_mbytes_per_sec": 0 00:04:08.527 }, 00:04:08.527 "block_size": 4096, 00:04:08.527 "claimed": false, 00:04:08.527 "driver_specific": {}, 00:04:08.527 "memory_domains": [ 00:04:08.527 { 00:04:08.527 "dma_device_id": "system", 00:04:08.527 "dma_device_type": 1 00:04:08.527 }, 00:04:08.527 { 00:04:08.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.527 "dma_device_type": 2 00:04:08.527 } 00:04:08.527 ], 00:04:08.527 "name": "Malloc1", 00:04:08.527 "num_blocks": 256, 00:04:08.527 "product_name": "Malloc disk", 00:04:08.527 "supported_io_types": { 00:04:08.527 "abort": true, 00:04:08.527 "compare": false, 00:04:08.527 "compare_and_write": false, 00:04:08.527 "copy": true, 00:04:08.527 "flush": true, 00:04:08.527 "get_zone_info": false, 00:04:08.527 "nvme_admin": false, 00:04:08.527 "nvme_io": false, 00:04:08.527 "nvme_io_md": false, 00:04:08.527 "nvme_iov_md": false, 00:04:08.527 "read": true, 00:04:08.527 "reset": true, 00:04:08.527 "seek_data": false, 00:04:08.527 "seek_hole": false, 00:04:08.527 "unmap": true, 00:04:08.527 "write": true, 00:04:08.527 "write_zeroes": true, 00:04:08.527 "zcopy": true, 00:04:08.527 "zone_append": false, 
00:04:08.527 "zone_management": false 00:04:08.527 }, 00:04:08.527 "uuid": "dfe250f7-b6e2-4596-8a03-107cfd86a6ef", 00:04:08.527 "zoned": false 00:04:08.527 } 00:04:08.527 ]' 00:04:08.527 17:51:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:08.527 17:51:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:08.527 17:51:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.527 17:51:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.527 17:51:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:08.527 17:51:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:08.527 ************************************ 00:04:08.527 END TEST rpc_plugins 00:04:08.527 ************************************ 00:04:08.527 17:51:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:08.527 00:04:08.527 real 0m0.157s 00:04:08.527 user 0m0.090s 00:04:08.527 sys 0m0.031s 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.527 17:51:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.527 17:51:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:08.527 17:51:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.527 17:51:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.527 17:51:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.527 ************************************ 00:04:08.527 START TEST rpc_trace_cmd_test 00:04:08.527 ************************************ 00:04:08.527 17:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:08.527 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:08.527 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:08.527 17:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.527 17:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.527 17:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.527 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:08.527 "bdev": { 00:04:08.527 "mask": "0x8", 00:04:08.527 "tpoint_mask": "0xffffffffffffffff" 00:04:08.527 }, 00:04:08.527 "bdev_nvme": { 00:04:08.527 "mask": "0x4000", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "blobfs": { 00:04:08.527 "mask": "0x80", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "dsa": { 00:04:08.527 "mask": "0x200", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "ftl": { 00:04:08.527 "mask": "0x40", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "iaa": { 00:04:08.527 "mask": "0x1000", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "iscsi_conn": { 00:04:08.527 "mask": "0x2", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "nvme_pcie": { 00:04:08.527 "mask": "0x800", 
00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "nvme_tcp": { 00:04:08.527 "mask": "0x2000", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "nvmf_rdma": { 00:04:08.527 "mask": "0x10", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "nvmf_tcp": { 00:04:08.527 "mask": "0x20", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "scsi": { 00:04:08.527 "mask": "0x4", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "sock": { 00:04:08.527 "mask": "0x8000", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "thread": { 00:04:08.527 "mask": "0x400", 00:04:08.527 "tpoint_mask": "0x0" 00:04:08.527 }, 00:04:08.527 "tpoint_group_mask": "0x8", 00:04:08.527 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60631" 00:04:08.527 }' 00:04:08.527 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:08.787 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:08.787 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:08.787 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:08.787 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:08.787 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:08.787 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:08.787 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:08.787 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:08.787 17:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:08.787 00:04:08.787 real 0m0.275s 00:04:08.787 user 0m0.226s 00:04:08.787 sys 0m0.039s 00:04:08.787 17:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.787 17:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.787 ************************************ 00:04:08.787 END TEST rpc_trace_cmd_test 00:04:08.787 ************************************ 00:04:08.787 17:51:15 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:08.787 17:51:15 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:08.787 17:51:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.787 17:51:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.787 17:51:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.048 ************************************ 00:04:09.048 START TEST go_rpc 00:04:09.048 ************************************ 00:04:09.048 17:51:15 rpc.go_rpc -- common/autotest_common.sh@1125 -- # go_rpc 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.048 17:51:15 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.048 17:51:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.048 17:51:15 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:09.048 17:51:15 rpc.go_rpc 
-- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["277ebb8b-5c87-416a-b0a7-b34408f9d9e4"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"277ebb8b-5c87-416a-b0a7-b34408f9d9e4","zoned":false}]' 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.048 17:51:15 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.048 17:51:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.048 17:51:15 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:09.048 17:51:15 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:09.048 ************************************ 00:04:09.048 END TEST go_rpc 00:04:09.048 ************************************ 00:04:09.048 00:04:09.048 real 0m0.198s 00:04:09.048 user 0m0.128s 00:04:09.048 sys 0m0.042s 00:04:09.048 17:51:15 rpc.go_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.048 17:51:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.048 17:51:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.048 17:51:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.048 17:51:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.048 17:51:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.048 17:51:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.048 ************************************ 00:04:09.048 START TEST rpc_daemon_integrity 00:04:09.048 ************************************ 00:04:09.048 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:09.048 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.048 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.048 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.307 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.307 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.307 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.307 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.308 { 00:04:09.308 "aliases": [ 00:04:09.308 "9ed28143-3d39-4021-823e-da4a20484558" 00:04:09.308 ], 00:04:09.308 "assigned_rate_limits": { 00:04:09.308 "r_mbytes_per_sec": 0, 00:04:09.308 "rw_ios_per_sec": 0, 00:04:09.308 "rw_mbytes_per_sec": 0, 00:04:09.308 "w_mbytes_per_sec": 0 00:04:09.308 }, 00:04:09.308 "block_size": 512, 00:04:09.308 "claimed": false, 00:04:09.308 "driver_specific": {}, 00:04:09.308 "memory_domains": [ 00:04:09.308 { 00:04:09.308 "dma_device_id": "system", 00:04:09.308 "dma_device_type": 1 00:04:09.308 }, 00:04:09.308 { 00:04:09.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.308 "dma_device_type": 2 00:04:09.308 } 00:04:09.308 ], 00:04:09.308 "name": "Malloc3", 00:04:09.308 "num_blocks": 16384, 00:04:09.308 "product_name": "Malloc disk", 00:04:09.308 "supported_io_types": { 00:04:09.308 "abort": true, 00:04:09.308 "compare": false, 00:04:09.308 "compare_and_write": false, 00:04:09.308 "copy": true, 00:04:09.308 "flush": true, 00:04:09.308 "get_zone_info": false, 00:04:09.308 "nvme_admin": false, 00:04:09.308 "nvme_io": false, 00:04:09.308 "nvme_io_md": false, 00:04:09.308 "nvme_iov_md": false, 00:04:09.308 "read": true, 00:04:09.308 "reset": true, 00:04:09.308 "seek_data": false, 00:04:09.308 "seek_hole": false, 00:04:09.308 "unmap": true, 00:04:09.308 "write": true, 00:04:09.308 "write_zeroes": true, 00:04:09.308 "zcopy": true, 00:04:09.308 "zone_append": false, 00:04:09.308 "zone_management": false 00:04:09.308 }, 00:04:09.308 "uuid": "9ed28143-3d39-4021-823e-da4a20484558", 00:04:09.308 "zoned": false 00:04:09.308 } 00:04:09.308 ]' 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.308 [2024-07-24 17:51:16.149019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:09.308 [2024-07-24 17:51:16.149070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.308 [2024-07-24 17:51:16.149088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa0fd70 00:04:09.308 [2024-07-24 17:51:16.149097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.308 [2024-07-24 17:51:16.150402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.308 [2024-07-24 17:51:16.150429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.308 Passthru0 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.308 { 00:04:09.308 "aliases": [ 00:04:09.308 "9ed28143-3d39-4021-823e-da4a20484558" 00:04:09.308 ], 00:04:09.308 "assigned_rate_limits": { 00:04:09.308 "r_mbytes_per_sec": 0, 00:04:09.308 "rw_ios_per_sec": 0, 00:04:09.308 "rw_mbytes_per_sec": 0, 00:04:09.308 "w_mbytes_per_sec": 0 00:04:09.308 }, 00:04:09.308 "block_size": 512, 00:04:09.308 "claim_type": "exclusive_write", 00:04:09.308 "claimed": true, 00:04:09.308 "driver_specific": {}, 00:04:09.308 "memory_domains": [ 00:04:09.308 { 00:04:09.308 "dma_device_id": "system", 00:04:09.308 "dma_device_type": 1 00:04:09.308 }, 00:04:09.308 { 00:04:09.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.308 "dma_device_type": 2 00:04:09.308 } 00:04:09.308 ], 00:04:09.308 "name": "Malloc3", 00:04:09.308 "num_blocks": 16384, 00:04:09.308 "product_name": "Malloc disk", 00:04:09.308 "supported_io_types": { 00:04:09.308 "abort": true, 00:04:09.308 "compare": false, 00:04:09.308 "compare_and_write": false, 00:04:09.308 "copy": true, 00:04:09.308 "flush": true, 00:04:09.308 "get_zone_info": false, 00:04:09.308 "nvme_admin": false, 00:04:09.308 "nvme_io": false, 00:04:09.308 "nvme_io_md": false, 00:04:09.308 "nvme_iov_md": false, 00:04:09.308 "read": true, 00:04:09.308 "reset": true, 00:04:09.308 "seek_data": false, 00:04:09.308 "seek_hole": false, 00:04:09.308 "unmap": true, 00:04:09.308 "write": true, 00:04:09.308 "write_zeroes": true, 00:04:09.308 "zcopy": true, 00:04:09.308 "zone_append": false, 00:04:09.308 "zone_management": false 00:04:09.308 }, 00:04:09.308 "uuid": "9ed28143-3d39-4021-823e-da4a20484558", 00:04:09.308 "zoned": false 00:04:09.308 }, 00:04:09.308 { 00:04:09.308 "aliases": [ 00:04:09.308 "e09dd5a2-eeed-5b79-9724-bd28559f26e4" 00:04:09.308 ], 00:04:09.308 "assigned_rate_limits": { 00:04:09.308 "r_mbytes_per_sec": 0, 00:04:09.308 "rw_ios_per_sec": 0, 00:04:09.308 "rw_mbytes_per_sec": 0, 00:04:09.308 "w_mbytes_per_sec": 0 00:04:09.308 }, 00:04:09.308 "block_size": 512, 00:04:09.308 "claimed": false, 00:04:09.308 "driver_specific": { 00:04:09.308 "passthru": { 00:04:09.308 "base_bdev_name": "Malloc3", 00:04:09.308 "name": "Passthru0" 00:04:09.308 } 00:04:09.308 }, 00:04:09.308 "memory_domains": [ 00:04:09.308 { 00:04:09.308 "dma_device_id": "system", 00:04:09.308 "dma_device_type": 1 00:04:09.308 }, 00:04:09.308 { 00:04:09.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.308 "dma_device_type": 2 00:04:09.308 } 00:04:09.308 ], 00:04:09.308 "name": "Passthru0", 00:04:09.308 "num_blocks": 16384, 00:04:09.308 "product_name": "passthru", 00:04:09.308 "supported_io_types": { 00:04:09.308 "abort": true, 00:04:09.308 "compare": false, 00:04:09.308 "compare_and_write": false, 00:04:09.308 "copy": true, 00:04:09.308 "flush": true, 00:04:09.308 "get_zone_info": false, 00:04:09.308 "nvme_admin": false, 00:04:09.308 "nvme_io": false, 00:04:09.308 "nvme_io_md": false, 00:04:09.308 "nvme_iov_md": false, 00:04:09.308 "read": true, 00:04:09.308 "reset": true, 00:04:09.308 "seek_data": false, 00:04:09.308 "seek_hole": false, 00:04:09.308 
"unmap": true, 00:04:09.308 "write": true, 00:04:09.308 "write_zeroes": true, 00:04:09.308 "zcopy": true, 00:04:09.308 "zone_append": false, 00:04:09.308 "zone_management": false 00:04:09.308 }, 00:04:09.308 "uuid": "e09dd5a2-eeed-5b79-9724-bd28559f26e4", 00:04:09.308 "zoned": false 00:04:09.308 } 00:04:09.308 ]' 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.308 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.566 17:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.566 00:04:09.566 real 0m0.311s 00:04:09.566 user 0m0.192s 00:04:09.566 sys 0m0.054s 00:04:09.566 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.566 17:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.566 ************************************ 00:04:09.566 END TEST rpc_daemon_integrity 00:04:09.566 ************************************ 00:04:09.566 17:51:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:09.566 17:51:16 rpc -- rpc/rpc.sh@84 -- # killprocess 60631 00:04:09.566 17:51:16 rpc -- common/autotest_common.sh@950 -- # '[' -z 60631 ']' 00:04:09.566 17:51:16 rpc -- common/autotest_common.sh@954 -- # kill -0 60631 00:04:09.566 17:51:16 rpc -- common/autotest_common.sh@955 -- # uname 00:04:09.566 17:51:16 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:09.567 17:51:16 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60631 00:04:09.567 17:51:16 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:09.567 17:51:16 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:09.567 killing process with pid 60631 00:04:09.567 17:51:16 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60631' 00:04:09.567 17:51:16 rpc -- common/autotest_common.sh@969 -- # kill 60631 00:04:09.567 17:51:16 rpc -- common/autotest_common.sh@974 -- # wait 60631 00:04:09.824 00:04:09.824 real 0m3.010s 00:04:09.824 user 0m3.904s 00:04:09.824 sys 0m0.844s 00:04:09.824 ************************************ 00:04:09.824 17:51:16 rpc -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:04:09.824 17:51:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.824 END TEST rpc 00:04:09.824 ************************************ 00:04:09.824 17:51:16 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:09.824 17:51:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.824 17:51:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.824 17:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:09.824 ************************************ 00:04:09.824 START TEST skip_rpc 00:04:09.824 ************************************ 00:04:09.824 17:51:16 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:10.082 * Looking for test storage... 00:04:10.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.082 17:51:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:10.082 17:51:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:10.082 17:51:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:10.082 17:51:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.082 17:51:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.082 17:51:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.082 ************************************ 00:04:10.082 START TEST skip_rpc 00:04:10.082 ************************************ 00:04:10.082 17:51:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:10.082 17:51:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60893 00:04:10.082 17:51:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:10.082 17:51:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.082 17:51:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:10.082 [2024-07-24 17:51:16.922295] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
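The rpc_integrity and rpc_daemon_integrity cases above reduce to the same create/claim/delete round trip over JSON-RPC. A rough replay with scripts/rpc.py in place of the test's rpc_cmd wrapper, using the same methods and arguments that appear in the trace (the rpc.py path and default socket are the usual SPDK layout, not taken from this log):

  ./scripts/rpc.py bdev_malloc_create 8 512               # returned 'Malloc0' in this run
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length             # 2: base bdev plus the passthru claiming it
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length             # back to 0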
00:04:10.082 [2024-07-24 17:51:16.922385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60893 ] 00:04:10.340 [2024-07-24 17:51:17.059478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.340 [2024-07-24 17:51:17.161230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.608 2024/07/24 17:51:21 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60893 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 60893 ']' 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 60893 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60893 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:15.608 killing process with pid 60893 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60893' 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 60893 00:04:15.608 17:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 60893 00:04:15.608 00:04:15.608 real 0m5.388s 00:04:15.608 user 0m5.039s 00:04:15.608 sys 0m0.247s 00:04:15.608 17:51:22 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.608 17:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.608 ************************************ 00:04:15.608 END TEST skip_rpc 00:04:15.608 ************************************ 00:04:15.608 17:51:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:15.608 17:51:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.608 17:51:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.608 17:51:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.608 ************************************ 00:04:15.608 START TEST skip_rpc_with_json 00:04:15.608 ************************************ 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60986 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60986 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 60986 ']' 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:15.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:15.608 17:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.608 [2024-07-24 17:51:22.380581] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
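The skip_rpc case that just finished only verifies the negative path: a target started with --no-rpc-server never creates /var/tmp/spdk.sock, so the spdk_get_version attempt has to fail with "no such file or directory". A hand-run equivalent, with the binary path taken from this workspace and rpc.py assumed to be the stock client:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  # no socket should ever appear, so any RPC client must fail to connect
  test -S /var/tmp/spdk.sock || echo 'expected: no RPC socket'
  ./scripts/rpc.py spdk_get_version || echo 'expected: connect failure'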
00:04:15.608 [2024-07-24 17:51:22.381379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60986 ] 00:04:15.608 [2024-07-24 17:51:22.520190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.866 [2024-07-24 17:51:22.626800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.434 [2024-07-24 17:51:23.334983] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:16.434 2024/07/24 17:51:23 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:16.434 request: 00:04:16.434 { 00:04:16.434 "method": "nvmf_get_transports", 00:04:16.434 "params": { 00:04:16.434 "trtype": "tcp" 00:04:16.434 } 00:04:16.434 } 00:04:16.434 Got JSON-RPC error response 00:04:16.434 GoRPCClient: error on JSON-RPC call 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.434 [2024-07-24 17:51:23.351052] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.434 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.692 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.692 17:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.692 { 00:04:16.692 "subsystems": [ 00:04:16.692 { 00:04:16.692 "subsystem": "keyring", 00:04:16.692 "config": [] 00:04:16.692 }, 00:04:16.692 { 00:04:16.692 "subsystem": "iobuf", 00:04:16.692 "config": [ 00:04:16.692 { 00:04:16.692 "method": "iobuf_set_options", 00:04:16.692 "params": { 00:04:16.692 "large_bufsize": 135168, 00:04:16.692 "large_pool_count": 1024, 00:04:16.692 "small_bufsize": 8192, 00:04:16.692 "small_pool_count": 8192 00:04:16.692 } 00:04:16.692 } 00:04:16.692 ] 00:04:16.692 }, 00:04:16.692 { 00:04:16.692 "subsystem": "sock", 00:04:16.692 "config": [ 00:04:16.692 { 00:04:16.692 "method": "sock_set_default_impl", 00:04:16.692 "params": { 00:04:16.692 "impl_name": "posix" 00:04:16.692 } 00:04:16.692 }, 00:04:16.692 { 00:04:16.692 "method": 
"sock_impl_set_options", 00:04:16.692 "params": { 00:04:16.692 "enable_ktls": false, 00:04:16.692 "enable_placement_id": 0, 00:04:16.692 "enable_quickack": false, 00:04:16.692 "enable_recv_pipe": true, 00:04:16.692 "enable_zerocopy_send_client": false, 00:04:16.692 "enable_zerocopy_send_server": true, 00:04:16.693 "impl_name": "ssl", 00:04:16.693 "recv_buf_size": 4096, 00:04:16.693 "send_buf_size": 4096, 00:04:16.693 "tls_version": 0, 00:04:16.693 "zerocopy_threshold": 0 00:04:16.693 } 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "method": "sock_impl_set_options", 00:04:16.693 "params": { 00:04:16.693 "enable_ktls": false, 00:04:16.693 "enable_placement_id": 0, 00:04:16.693 "enable_quickack": false, 00:04:16.693 "enable_recv_pipe": true, 00:04:16.693 "enable_zerocopy_send_client": false, 00:04:16.693 "enable_zerocopy_send_server": true, 00:04:16.693 "impl_name": "posix", 00:04:16.693 "recv_buf_size": 2097152, 00:04:16.693 "send_buf_size": 2097152, 00:04:16.693 "tls_version": 0, 00:04:16.693 "zerocopy_threshold": 0 00:04:16.693 } 00:04:16.693 } 00:04:16.693 ] 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "subsystem": "vmd", 00:04:16.693 "config": [] 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "subsystem": "accel", 00:04:16.693 "config": [ 00:04:16.693 { 00:04:16.693 "method": "accel_set_options", 00:04:16.693 "params": { 00:04:16.693 "buf_count": 2048, 00:04:16.693 "large_cache_size": 16, 00:04:16.693 "sequence_count": 2048, 00:04:16.693 "small_cache_size": 128, 00:04:16.693 "task_count": 2048 00:04:16.693 } 00:04:16.693 } 00:04:16.693 ] 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "subsystem": "bdev", 00:04:16.693 "config": [ 00:04:16.693 { 00:04:16.693 "method": "bdev_set_options", 00:04:16.693 "params": { 00:04:16.693 "bdev_auto_examine": true, 00:04:16.693 "bdev_io_cache_size": 256, 00:04:16.693 "bdev_io_pool_size": 65535, 00:04:16.693 "iobuf_large_cache_size": 16, 00:04:16.693 "iobuf_small_cache_size": 128 00:04:16.693 } 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "method": "bdev_raid_set_options", 00:04:16.693 "params": { 00:04:16.693 "process_max_bandwidth_mb_sec": 0, 00:04:16.693 "process_window_size_kb": 1024 00:04:16.693 } 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "method": "bdev_iscsi_set_options", 00:04:16.693 "params": { 00:04:16.693 "timeout_sec": 30 00:04:16.693 } 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "method": "bdev_nvme_set_options", 00:04:16.693 "params": { 00:04:16.693 "action_on_timeout": "none", 00:04:16.693 "allow_accel_sequence": false, 00:04:16.693 "arbitration_burst": 0, 00:04:16.693 "bdev_retry_count": 3, 00:04:16.693 "ctrlr_loss_timeout_sec": 0, 00:04:16.693 "delay_cmd_submit": true, 00:04:16.693 "dhchap_dhgroups": [ 00:04:16.693 "null", 00:04:16.693 "ffdhe2048", 00:04:16.693 "ffdhe3072", 00:04:16.693 "ffdhe4096", 00:04:16.693 "ffdhe6144", 00:04:16.693 "ffdhe8192" 00:04:16.693 ], 00:04:16.693 "dhchap_digests": [ 00:04:16.693 "sha256", 00:04:16.693 "sha384", 00:04:16.693 "sha512" 00:04:16.693 ], 00:04:16.693 "disable_auto_failback": false, 00:04:16.693 "fast_io_fail_timeout_sec": 0, 00:04:16.693 "generate_uuids": false, 00:04:16.693 "high_priority_weight": 0, 00:04:16.693 "io_path_stat": false, 00:04:16.693 "io_queue_requests": 0, 00:04:16.693 "keep_alive_timeout_ms": 10000, 00:04:16.693 "low_priority_weight": 0, 00:04:16.693 "medium_priority_weight": 0, 00:04:16.693 "nvme_adminq_poll_period_us": 10000, 00:04:16.693 "nvme_error_stat": false, 00:04:16.693 "nvme_ioq_poll_period_us": 0, 00:04:16.693 "rdma_cm_event_timeout_ms": 0, 00:04:16.693 "rdma_max_cq_size": 
0, 00:04:16.693 "rdma_srq_size": 0, 00:04:16.693 "reconnect_delay_sec": 0, 00:04:16.693 "timeout_admin_us": 0, 00:04:16.693 "timeout_us": 0, 00:04:16.693 "transport_ack_timeout": 0, 00:04:16.693 "transport_retry_count": 4, 00:04:16.693 "transport_tos": 0 00:04:16.693 } 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "method": "bdev_nvme_set_hotplug", 00:04:16.693 "params": { 00:04:16.693 "enable": false, 00:04:16.693 "period_us": 100000 00:04:16.693 } 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "method": "bdev_wait_for_examine" 00:04:16.693 } 00:04:16.693 ] 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "subsystem": "scsi", 00:04:16.693 "config": null 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "subsystem": "scheduler", 00:04:16.693 "config": [ 00:04:16.693 { 00:04:16.693 "method": "framework_set_scheduler", 00:04:16.693 "params": { 00:04:16.693 "name": "static" 00:04:16.693 } 00:04:16.693 } 00:04:16.693 ] 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "subsystem": "vhost_scsi", 00:04:16.693 "config": [] 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "subsystem": "vhost_blk", 00:04:16.693 "config": [] 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "subsystem": "ublk", 00:04:16.693 "config": [] 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "subsystem": "nbd", 00:04:16.693 "config": [] 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "subsystem": "nvmf", 00:04:16.693 "config": [ 00:04:16.693 { 00:04:16.693 "method": "nvmf_set_config", 00:04:16.693 "params": { 00:04:16.693 "admin_cmd_passthru": { 00:04:16.693 "identify_ctrlr": false 00:04:16.693 }, 00:04:16.693 "discovery_filter": "match_any" 00:04:16.693 } 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "method": "nvmf_set_max_subsystems", 00:04:16.693 "params": { 00:04:16.693 "max_subsystems": 1024 00:04:16.693 } 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "method": "nvmf_set_crdt", 00:04:16.693 "params": { 00:04:16.693 "crdt1": 0, 00:04:16.693 "crdt2": 0, 00:04:16.693 "crdt3": 0 00:04:16.693 } 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "method": "nvmf_create_transport", 00:04:16.693 "params": { 00:04:16.693 "abort_timeout_sec": 1, 00:04:16.693 "ack_timeout": 0, 00:04:16.693 "buf_cache_size": 4294967295, 00:04:16.693 "c2h_success": true, 00:04:16.693 "data_wr_pool_size": 0, 00:04:16.693 "dif_insert_or_strip": false, 00:04:16.693 "in_capsule_data_size": 4096, 00:04:16.693 "io_unit_size": 131072, 00:04:16.693 "max_aq_depth": 128, 00:04:16.693 "max_io_qpairs_per_ctrlr": 127, 00:04:16.693 "max_io_size": 131072, 00:04:16.693 "max_queue_depth": 128, 00:04:16.693 "num_shared_buffers": 511, 00:04:16.693 "sock_priority": 0, 00:04:16.693 "trtype": "TCP", 00:04:16.693 "zcopy": false 00:04:16.693 } 00:04:16.693 } 00:04:16.693 ] 00:04:16.693 }, 00:04:16.693 { 00:04:16.693 "subsystem": "iscsi", 00:04:16.693 "config": [ 00:04:16.693 { 00:04:16.693 "method": "iscsi_set_options", 00:04:16.693 "params": { 00:04:16.693 "allow_duplicated_isid": false, 00:04:16.693 "chap_group": 0, 00:04:16.693 "data_out_pool_size": 2048, 00:04:16.693 "default_time2retain": 20, 00:04:16.693 "default_time2wait": 2, 00:04:16.693 "disable_chap": false, 00:04:16.693 "error_recovery_level": 0, 00:04:16.693 "first_burst_length": 8192, 00:04:16.693 "immediate_data": true, 00:04:16.693 "immediate_data_pool_size": 16384, 00:04:16.693 "max_connections_per_session": 2, 00:04:16.693 "max_large_datain_per_connection": 64, 00:04:16.693 "max_queue_depth": 64, 00:04:16.693 "max_r2t_per_connection": 4, 00:04:16.693 "max_sessions": 128, 00:04:16.693 "mutual_chap": false, 00:04:16.693 "node_base": "iqn.2016-06.io.spdk", 
00:04:16.693 "nop_in_interval": 30, 00:04:16.693 "nop_timeout": 60, 00:04:16.693 "pdu_pool_size": 36864, 00:04:16.693 "require_chap": false 00:04:16.693 } 00:04:16.693 } 00:04:16.693 ] 00:04:16.693 } 00:04:16.693 ] 00:04:16.693 } 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60986 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 60986 ']' 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 60986 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60986 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.693 killing process with pid 60986 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60986' 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 60986 00:04:16.693 17:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 60986 00:04:16.951 17:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61020 00:04:16.951 17:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.951 17:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:22.222 17:51:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61020 00:04:22.222 17:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 61020 ']' 00:04:22.223 17:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 61020 00:04:22.223 17:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:22.223 17:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:22.223 17:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61020 00:04:22.223 17:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:22.223 17:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:22.223 killing process with pid 61020 00:04:22.223 17:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61020' 00:04:22.223 17:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 61020 00:04:22.223 17:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 61020 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:22.482 00:04:22.482 real 0m6.959s 00:04:22.482 user 0m6.732s 00:04:22.482 sys 0m0.598s 00:04:22.482 17:51:29 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.482 ************************************ 00:04:22.482 END TEST skip_rpc_with_json 00:04:22.482 ************************************ 00:04:22.482 17:51:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:22.482 17:51:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.482 17:51:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.482 17:51:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.482 ************************************ 00:04:22.482 START TEST skip_rpc_with_delay 00:04:22.482 ************************************ 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.482 [2024-07-24 17:51:29.394612] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
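The *ERROR* line above is the expected outcome of test_skip_rpc_with_delay: with --no-rpc-server there is no RPC server for --wait-for-rpc to wait on, so spdk_tgt must refuse to start. A minimal sketch of that negative check, paraphrased from the trace (NOT is the exit-status-inverting helper from common/autotest_common.sh; this is not the verbatim test body):

    test_skip_rpc_with_delay() {
        # Passes only if spdk_tgt rejects the conflicting flags and exits non-zero.
        NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    }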
00:04:22.482 [2024-07-24 17:51:29.394770] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:22.482 00:04:22.482 real 0m0.089s 00:04:22.482 user 0m0.052s 00:04:22.482 sys 0m0.036s 00:04:22.482 ************************************ 00:04:22.482 END TEST skip_rpc_with_delay 00:04:22.482 ************************************ 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.482 17:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:22.482 17:51:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:22.742 17:51:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:22.742 17:51:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:22.742 17:51:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.742 17:51:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.742 17:51:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.742 ************************************ 00:04:22.742 START TEST exit_on_failed_rpc_init 00:04:22.742 ************************************ 00:04:22.742 17:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:22.742 17:51:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61135 00:04:22.742 17:51:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.742 17:51:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61135 00:04:22.742 17:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 61135 ']' 00:04:22.742 17:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.742 17:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.742 17:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.742 17:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.742 17:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.742 [2024-07-24 17:51:29.523147] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:04:22.742 [2024-07-24 17:51:29.523285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61135 ] 00:04:22.742 [2024-07-24 17:51:29.662082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.001 [2024-07-24 17:51:29.767670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:23.568 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.827 [2024-07-24 17:51:30.550557] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:04:23.827 [2024-07-24 17:51:30.550658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61166 ] 00:04:23.827 [2024-07-24 17:51:30.701930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.086 [2024-07-24 17:51:30.832533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.086 [2024-07-24 17:51:30.832634] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
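The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the point of exit_on_failed_rpc_init: pid 61135 already owns the default RPC socket, so a second target started on core mask 0x2 must fail to initialize. A condensed sketch of the scenario (paths, masks, and helpers taken from the trace; the real logic, including argument validation, lives in test/rpc/skip_rpc.sh):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_pid=$!
    waitforlisten "$spdk_pid"    # blocks until /var/tmp/spdk.sock accepts RPCs
    # The second instance reuses the same default socket and must fail RPC init:
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2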
00:04:24.086 [2024-07-24 17:51:30.832651] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:24.086 [2024-07-24 17:51:30.832663] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61135 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 61135 ']' 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 61135 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61135 00:04:24.086 killing process with pid 61135 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61135' 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 61135 00:04:24.086 17:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 61135 00:04:24.344 00:04:24.344 real 0m1.821s 00:04:24.344 user 0m2.165s 00:04:24.344 sys 0m0.420s 00:04:24.344 17:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.344 17:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.344 ************************************ 00:04:24.344 END TEST exit_on_failed_rpc_init 00:04:24.344 ************************************ 00:04:24.606 17:51:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:24.606 00:04:24.606 real 0m14.560s 00:04:24.606 user 0m14.085s 00:04:24.606 sys 0m1.501s 00:04:24.606 17:51:31 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.606 17:51:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.606 ************************************ 00:04:24.606 END TEST skip_rpc 00:04:24.606 ************************************ 00:04:24.606 17:51:31 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:24.606 17:51:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.606 17:51:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.606 17:51:31 -- common/autotest_common.sh@10 -- # set +x 00:04:24.606 
************************************ 00:04:24.606 START TEST rpc_client 00:04:24.606 ************************************ 00:04:24.606 17:51:31 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:24.606 * Looking for test storage... 00:04:24.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:24.606 17:51:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:24.606 OK 00:04:24.606 17:51:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.606 00:04:24.606 real 0m0.105s 00:04:24.606 user 0m0.044s 00:04:24.606 sys 0m0.065s 00:04:24.606 17:51:31 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.606 ************************************ 00:04:24.606 END TEST rpc_client 00:04:24.606 17:51:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:24.606 ************************************ 00:04:24.606 17:51:31 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.606 17:51:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.606 17:51:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.606 17:51:31 -- common/autotest_common.sh@10 -- # set +x 00:04:24.606 ************************************ 00:04:24.606 START TEST json_config 00:04:24.606 ************************************ 00:04:24.606 17:51:31 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.866 17:51:31 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.866 17:51:31 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.866 17:51:31 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.866 17:51:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.866 17:51:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.866 17:51:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.866 17:51:31 json_config -- paths/export.sh@5 -- # export PATH 00:04:24.866 17:51:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@47 -- # : 0 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:24.866 17:51:31 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.866 17:51:31 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.866 INFO: JSON configuration test init 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:24.866 17:51:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.866 17:51:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:24.866 17:51:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.866 17:51:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.866 17:51:31 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.866 17:51:31 json_config -- json_config/common.sh@9 -- # local app=target 00:04:24.866 17:51:31 json_config -- json_config/common.sh@10 -- # shift 00:04:24.866 17:51:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.866 17:51:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.866 17:51:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.866 17:51:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.866 17:51:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.866 17:51:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61284 00:04:24.866 Waiting for target to run... 00:04:24.866 17:51:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.866 17:51:31 json_config -- json_config/common.sh@25 -- # waitforlisten 61284 /var/tmp/spdk_tgt.sock 00:04:24.866 17:51:31 json_config -- common/autotest_common.sh@831 -- # '[' -z 61284 ']' 00:04:24.866 17:51:31 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.866 17:51:31 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.866 17:51:31 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
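The associative arrays above pin the json_config target to its own RPC socket, /var/tmp/spdk_tgt.sock, so it cannot collide with targets from other suites. The launch line they produce and the RPC wrapper used against it look roughly like this (flags copied from the trace that follows; the wrapper is a paraphrase of tgt_rpc in test/json_config/common.sh, not its verbatim source):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    tgt_rpc() {
        # Route every RPC of this test to the dedicated target socket.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
    }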
00:04:24.866 17:51:31 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.866 17:51:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.866 17:51:31 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.866 [2024-07-24 17:51:31.717444] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:04:24.866 [2024-07-24 17:51:31.717615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61284 ] 00:04:25.125 [2024-07-24 17:51:32.100435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.384 [2024-07-24 17:51:32.196328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.950 17:51:32 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:25.950 00:04:25.950 17:51:32 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:25.950 17:51:32 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.950 17:51:32 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:25.950 17:51:32 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:25.950 17:51:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.950 17:51:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.950 17:51:32 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:25.950 17:51:32 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:25.950 17:51:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.950 17:51:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.950 17:51:32 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:25.950 17:51:32 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:25.951 17:51:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:26.525 17:51:33 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:26.525 17:51:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:26.525 17:51:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.525 17:51:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.525 17:51:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:26.525 17:51:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:26.525 17:51:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:26.525 17:51:33 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:26.525 17:51:33 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:26.525 17:51:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@48 -- # local 
get_types 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@51 -- # sort 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:26.786 17:51:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.786 17:51:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:26.786 17:51:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.786 17:51:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:26.786 17:51:33 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.786 17:51:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:27.045 MallocForNvmf0 00:04:27.045 17:51:33 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:27.045 17:51:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:27.303 MallocForNvmf1 00:04:27.303 17:51:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.303 17:51:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.562 [2024-07-24 17:51:34.417714] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.562 17:51:34 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.562 17:51:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.822 17:51:34 json_config -- json_config/json_config.sh@251 -- # 
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.822 17:51:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:28.081 17:51:34 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:28.081 17:51:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:28.338 17:51:35 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:28.338 17:51:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:28.595 [2024-07-24 17:51:35.374185] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:28.595 17:51:35 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:28.595 17:51:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.595 17:51:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.595 17:51:35 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:28.595 17:51:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.595 17:51:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.595 17:51:35 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:28.595 17:51:35 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.595 17:51:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.853 MallocBdevForConfigChangeCheck 00:04:28.853 17:51:35 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:28.853 17:51:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.853 17:51:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.853 17:51:35 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:28.853 17:51:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.421 INFO: shutting down applications... 00:04:29.421 17:51:36 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
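The configuration being validated here is built entirely over RPC before save_config runs: two malloc bdevs, a TCP transport, and an NVMe-oF subsystem with namespaces and a listener, plus a marker bdev used later to detect configuration changes. The sequence, as issued through tgt_rpc (commands copied from the trace above):

    tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
    tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck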
00:04:29.421 17:51:36 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:29.421 17:51:36 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:29.421 17:51:36 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:29.421 17:51:36 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:29.680 Calling clear_iscsi_subsystem 00:04:29.680 Calling clear_nvmf_subsystem 00:04:29.680 Calling clear_nbd_subsystem 00:04:29.680 Calling clear_ublk_subsystem 00:04:29.680 Calling clear_vhost_blk_subsystem 00:04:29.680 Calling clear_vhost_scsi_subsystem 00:04:29.680 Calling clear_bdev_subsystem 00:04:29.680 17:51:36 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:29.680 17:51:36 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:29.680 17:51:36 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:29.680 17:51:36 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.680 17:51:36 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:29.680 17:51:36 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:29.938 17:51:36 json_config -- json_config/json_config.sh@349 -- # break 00:04:29.939 17:51:36 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:29.939 17:51:36 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:29.939 17:51:36 json_config -- json_config/common.sh@31 -- # local app=target 00:04:29.939 17:51:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.939 17:51:36 json_config -- json_config/common.sh@35 -- # [[ -n 61284 ]] 00:04:29.939 17:51:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61284 00:04:29.939 17:51:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.939 17:51:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.939 17:51:36 json_config -- json_config/common.sh@41 -- # kill -0 61284 00:04:29.939 17:51:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.532 17:51:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.532 17:51:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.532 17:51:37 json_config -- json_config/common.sh@41 -- # kill -0 61284 00:04:30.532 17:51:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.532 17:51:37 json_config -- json_config/common.sh@43 -- # break 00:04:30.532 17:51:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.532 SPDK target shutdown done 00:04:30.533 17:51:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.533 INFO: relaunching applications... 00:04:30.533 17:51:37 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
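The "(( i < 30 ))" / "kill -0" fragments above are the shutdown poll: after SIGINT is delivered, the helper gives the target up to thirty half-second intervals to exit before giving up. A condensed reconstruction of json_config_test_shutdown_app (an assumed simplification of test/json_config/common.sh, not the verbatim helper):

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2> /dev/null || break   # target has exited cleanly
        sleep 0.5
    done
    echo 'SPDK target shutdown done'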
00:04:30.533 17:51:37 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.533 17:51:37 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.533 17:51:37 json_config -- json_config/common.sh@10 -- # shift 00:04:30.533 17:51:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.533 17:51:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.533 17:51:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.533 17:51:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.533 17:51:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.533 Waiting for target to run... 00:04:30.533 17:51:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61560 00:04:30.533 17:51:37 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.533 17:51:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.533 17:51:37 json_config -- json_config/common.sh@25 -- # waitforlisten 61560 /var/tmp/spdk_tgt.sock 00:04:30.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.533 17:51:37 json_config -- common/autotest_common.sh@831 -- # '[' -z 61560 ']' 00:04:30.533 17:51:37 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.533 17:51:37 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.533 17:51:37 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.533 17:51:37 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.533 17:51:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.533 [2024-07-24 17:51:37.475704] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:04:30.533 [2024-07-24 17:51:37.475806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61560 ] 00:04:31.145 [2024-07-24 17:51:37.875554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.145 [2024-07-24 17:51:37.955708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.403 [2024-07-24 17:51:38.276288] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.403 [2024-07-24 17:51:38.308338] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.662 17:51:38 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.662 17:51:38 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:31.662 00:04:31.662 17:51:38 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.662 17:51:38 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:31.662 INFO: Checking if target configuration is the same... 00:04:31.662 17:51:38 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
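"Checking if target configuration is the same" compares a fresh save_config dump (passed in as /dev/fd/62) against the spdk_tgt_config.json the target was just relaunched from. json_diff.sh normalizes both sides before diffing; roughly (paraphrased from the '+' trace below, with the stdin/stdout redirections written out explicitly since set -x does not show them):

    config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    "$config_filter" -method sort < "$1" > "$tmp_file_1"
    "$config_filter" -method sort < "$2" > "$tmp_file_2"
    diff -u "$tmp_file_1" "$tmp_file_2" && echo 'INFO: JSON config files are the same'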
00:04:31.662 17:51:38 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:31.662 17:51:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.662 17:51:38 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.662 + '[' 2 -ne 2 ']' 00:04:31.662 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:31.662 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:31.662 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:31.662 +++ basename /dev/fd/62 00:04:31.662 ++ mktemp /tmp/62.XXX 00:04:31.662 + tmp_file_1=/tmp/62.h1y 00:04:31.662 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.662 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.662 + tmp_file_2=/tmp/spdk_tgt_config.json.rms 00:04:31.662 + ret=0 00:04:31.662 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.921 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.921 + diff -u /tmp/62.h1y /tmp/spdk_tgt_config.json.rms 00:04:31.921 INFO: JSON config files are the same 00:04:31.921 + echo 'INFO: JSON config files are the same' 00:04:31.921 + rm /tmp/62.h1y /tmp/spdk_tgt_config.json.rms 00:04:31.921 + exit 0 00:04:32.179 INFO: changing configuration and checking if this can be detected... 00:04:32.179 17:51:38 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:32.179 17:51:38 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:32.179 17:51:38 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:32.179 17:51:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:32.179 17:51:39 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:32.179 17:51:39 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.179 17:51:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.179 + '[' 2 -ne 2 ']' 00:04:32.179 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:32.179 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:32.179 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:32.179 +++ basename /dev/fd/62 00:04:32.179 ++ mktemp /tmp/62.XXX 00:04:32.179 + tmp_file_1=/tmp/62.gqh 00:04:32.179 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.179 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:32.179 + tmp_file_2=/tmp/spdk_tgt_config.json.hW8 00:04:32.179 + ret=0 00:04:32.179 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.745 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.745 + diff -u /tmp/62.gqh /tmp/spdk_tgt_config.json.hW8 00:04:32.745 + ret=1 00:04:32.745 + echo '=== Start of file: /tmp/62.gqh ===' 00:04:32.745 + cat /tmp/62.gqh 00:04:32.745 + echo '=== End of file: /tmp/62.gqh ===' 00:04:32.745 + echo '' 00:04:32.745 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hW8 ===' 00:04:32.745 + cat /tmp/spdk_tgt_config.json.hW8 00:04:32.745 + echo '=== End of file: /tmp/spdk_tgt_config.json.hW8 ===' 00:04:32.745 + echo '' 00:04:32.745 + rm /tmp/62.gqh /tmp/spdk_tgt_config.json.hW8 00:04:32.745 + exit 1 00:04:32.745 INFO: configuration change detected. 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@321 -- # [[ -n 61560 ]] 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.745 17:51:39 json_config -- json_config/json_config.sh@327 -- # killprocess 61560 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@950 -- # '[' -z 61560 ']' 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@954 -- # kill -0 61560 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@955 -- # uname 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61560 00:04:32.745 
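The ret=1 from this second diff is deliberate: between the two comparisons the test deletes the marker bdev it registered earlier, so "configuration change detected" is the passing outcome. The triggering command, as traced further above:

    tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    # i.e. scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck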
killing process with pid 61560 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61560' 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@969 -- # kill 61560 00:04:32.745 17:51:39 json_config -- common/autotest_common.sh@974 -- # wait 61560 00:04:33.034 17:51:39 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.034 17:51:39 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:33.034 17:51:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:33.034 17:51:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.034 INFO: Success 00:04:33.034 17:51:39 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:33.034 17:51:39 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:33.034 ************************************ 00:04:33.034 END TEST json_config 00:04:33.034 ************************************ 00:04:33.034 00:04:33.034 real 0m8.389s 00:04:33.034 user 0m11.819s 00:04:33.034 sys 0m2.041s 00:04:33.034 17:51:39 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.034 17:51:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.034 17:51:39 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:33.034 17:51:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.034 17:51:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.034 17:51:39 -- common/autotest_common.sh@10 -- # set +x 00:04:33.034 ************************************ 00:04:33.034 START TEST json_config_extra_key 00:04:33.034 ************************************ 00:04:33.034 17:51:39 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:04:33.293 17:51:40 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:33.293 17:51:40 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.293 17:51:40 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.293 17:51:40 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.293 17:51:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.293 17:51:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.293 17:51:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.293 17:51:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:33.293 17:51:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.293 17:51:40 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:33.293 17:51:40 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:33.293 INFO: launching applications... 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:33.293 17:51:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:33.293 17:51:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:33.293 17:51:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:33.293 17:51:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.293 17:51:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.293 17:51:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.293 17:51:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.293 17:51:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.293 Waiting for target to run... 00:04:33.293 17:51:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61736 00:04:33.294 17:51:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.294 17:51:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61736 /var/tmp/spdk_tgt.sock 00:04:33.294 17:51:40 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 61736 ']' 00:04:33.294 17:51:40 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.294 17:51:40 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.294 17:51:40 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
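Unlike the json_config run, the extra_key variant boots the target directly from a canned JSON file rather than --wait-for-rpc followed by RPCs; the launch line assembled from the parameters above (copied from the trace that follows) is:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json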
00:04:33.294 17:51:40 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.294 17:51:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.294 17:51:40 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:33.294 [2024-07-24 17:51:40.135451] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:04:33.294 [2024-07-24 17:51:40.135566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61736 ] 00:04:33.553 [2024-07-24 17:51:40.517050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.812 [2024-07-24 17:51:40.598190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.379 00:04:34.379 INFO: shutting down applications... 00:04:34.379 17:51:41 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.379 17:51:41 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:34.379 17:51:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:34.379 17:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:34.379 17:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:34.379 17:51:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:34.379 17:51:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:34.379 17:51:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61736 ]] 00:04:34.379 17:51:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61736 00:04:34.379 17:51:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:34.379 17:51:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.379 17:51:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61736 00:04:34.379 17:51:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.946 17:51:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.946 17:51:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.946 17:51:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61736 00:04:34.946 17:51:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.946 17:51:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:34.946 SPDK target shutdown done 00:04:34.946 Success 00:04:34.946 17:51:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.946 17:51:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.946 17:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:34.946 00:04:34.946 real 0m1.685s 00:04:34.946 user 0m1.565s 00:04:34.946 sys 0m0.421s 00:04:34.946 17:51:41 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.946 17:51:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.946 ************************************ 00:04:34.946 END TEST 
json_config_extra_key 00:04:34.946 ************************************ 00:04:34.946 17:51:41 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.946 17:51:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.946 17:51:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.946 17:51:41 -- common/autotest_common.sh@10 -- # set +x 00:04:34.946 ************************************ 00:04:34.946 START TEST alias_rpc 00:04:34.946 ************************************ 00:04:34.946 17:51:41 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.946 * Looking for test storage... 00:04:34.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:34.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.946 17:51:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:34.946 17:51:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61818 00:04:34.946 17:51:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61818 00:04:34.946 17:51:41 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 61818 ']' 00:04:34.946 17:51:41 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.946 17:51:41 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.946 17:51:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.946 17:51:41 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.946 17:51:41 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.946 17:51:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.946 [2024-07-24 17:51:41.884966] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
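The shutdown sequence that just completed above is the counterpart of that start: send SIGINT to the target, then poll until the pid is gone. A minimal sketch of the loop visible in the transcript (names are illustrative; the real logic lives in json_config/common.sh):

    # Ask the target to exit and give it up to ~15 s (30 x 0.5 s) to do so.
    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$tgt_pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'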
00:04:34.946 [2024-07-24 17:51:41.885074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61818 ] 00:04:35.204 [2024-07-24 17:51:42.027797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.204 [2024-07-24 17:51:42.131673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.196 17:51:42 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.196 17:51:42 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:36.196 17:51:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:36.454 17:51:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61818 00:04:36.454 17:51:43 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 61818 ']' 00:04:36.454 17:51:43 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 61818 00:04:36.454 17:51:43 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:36.454 17:51:43 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.454 17:51:43 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61818 00:04:36.454 killing process with pid 61818 00:04:36.454 17:51:43 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.454 17:51:43 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.454 17:51:43 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61818' 00:04:36.454 17:51:43 alias_rpc -- common/autotest_common.sh@969 -- # kill 61818 00:04:36.454 17:51:43 alias_rpc -- common/autotest_common.sh@974 -- # wait 61818 00:04:36.712 ************************************ 00:04:36.712 END TEST alias_rpc 00:04:36.712 ************************************ 00:04:36.712 00:04:36.712 real 0m1.822s 00:04:36.712 user 0m2.121s 00:04:36.712 sys 0m0.454s 00:04:36.712 17:51:43 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.712 17:51:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.712 17:51:43 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:04:36.712 17:51:43 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.712 17:51:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.712 17:51:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.712 17:51:43 -- common/autotest_common.sh@10 -- # set +x 00:04:36.712 ************************************ 00:04:36.712 START TEST dpdk_mem_utility 00:04:36.712 ************************************ 00:04:36.712 17:51:43 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.971 * Looking for test storage... 00:04:36.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:36.971 17:51:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:36.971 17:51:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61906 00:04:36.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
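The whole alias_rpc test above hinges on one RPC client call, scripts/rpc.py load_config, which reads a saved JSON-RPC configuration from stdin and replays it against the running target; the exact JSON fed in by alias_rpc.sh is not shown in this log. A minimal sketch of the round trip against the default /var/tmp/spdk.sock socket, with an example file name:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    # Capture the target's current configuration as JSON-RPC calls...
    "$SPDK_DIR/scripts/rpc.py" save_config > saved_config.json

    # ...and replay it later (alias_rpc.sh pipes its own config in the same way).
    "$SPDK_DIR/scripts/rpc.py" load_config < saved_config.json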
00:04:36.971 17:51:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61906 00:04:36.971 17:51:43 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 61906 ']' 00:04:36.971 17:51:43 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.971 17:51:43 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.971 17:51:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:36.971 17:51:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.971 17:51:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.971 17:51:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.971 [2024-07-24 17:51:43.768833] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:04:36.971 [2024-07-24 17:51:43.768945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61906 ] 00:04:36.971 [2024-07-24 17:51:43.908365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.229 [2024-07-24 17:51:44.028765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.835 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.835 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:37.835 17:51:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:37.835 17:51:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:37.835 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.835 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.835 { 00:04:37.835 "filename": "/tmp/spdk_mem_dump.txt" 00:04:37.835 } 00:04:37.835 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.835 17:51:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:38.094 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:38.094 1 heaps totaling size 814.000000 MiB 00:04:38.094 size: 814.000000 MiB heap id: 0 00:04:38.094 end heaps---------- 00:04:38.094 8 mempools totaling size 598.116089 MiB 00:04:38.094 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:38.094 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:38.094 size: 84.521057 MiB name: bdev_io_61906 00:04:38.094 size: 51.011292 MiB name: evtpool_61906 00:04:38.094 size: 50.003479 MiB name: msgpool_61906 00:04:38.094 size: 21.763794 MiB name: PDU_Pool 00:04:38.094 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:38.094 size: 0.026123 MiB name: Session_Pool 00:04:38.094 end mempools------- 00:04:38.094 6 memzones totaling size 4.142822 MiB 00:04:38.094 size: 1.000366 MiB name: RG_ring_0_61906 00:04:38.094 size: 1.000366 MiB name: RG_ring_1_61906 00:04:38.094 size: 1.000366 MiB name: RG_ring_4_61906 00:04:38.094 size: 1.000366 MiB name: RG_ring_5_61906 00:04:38.094 size: 0.125366 MiB name: RG_ring_2_61906 00:04:38.094 size: 0.015991 MiB name: RG_ring_3_61906 
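The dpdk_mem_utility test above is a two-step flow: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state to a dump file (/tmp/spdk_mem_dump.txt in this run), and scripts/dpdk_mem_info.py then summarizes that dump. A minimal sketch of running the same two steps by hand against the default RPC socket:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    # Ask the running target to dump its DPDK memory state; it answers with
    # the dump file name, {"filename": "/tmp/spdk_mem_dump.txt"} in this run.
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

    # Summarize heaps, mempools and memzones from the dump...
    "$SPDK_DIR/scripts/dpdk_mem_info.py"

    # ...and print the per-element detail for heap id 0, which is what produces
    # the long "element at address ... with size ..." listing below.
    "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0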
00:04:38.094 end memzones------- 00:04:38.094 17:51:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:38.094 heap id: 0 total size: 814.000000 MiB number of busy elements: 229 number of free elements: 15 00:04:38.094 list of free elements. size: 12.484924 MiB 00:04:38.094 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:38.094 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:38.094 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:38.094 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:38.094 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:38.094 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:38.094 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:38.094 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:38.094 element at address: 0x200000200000 with size: 0.836853 MiB 00:04:38.094 element at address: 0x20001aa00000 with size: 0.571533 MiB 00:04:38.094 element at address: 0x20000b200000 with size: 0.489441 MiB 00:04:38.094 element at address: 0x200000800000 with size: 0.486877 MiB 00:04:38.094 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:38.094 element at address: 0x200027e00000 with size: 0.397949 MiB 00:04:38.094 element at address: 0x200003a00000 with size: 0.351501 MiB 00:04:38.094 list of standard malloc elements. size: 199.252502 MiB 00:04:38.094 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:38.094 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:38.094 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:38.094 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:38.094 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:38.094 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:38.094 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:38.094 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:38.094 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:38.094 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:38.094 element at address: 
0x2000002d7340 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:38.094 element at address: 0x200003affb40 with size: 
0.000183 MiB 00:04:38.094 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:38.094 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:38.095 
element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:38.095 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:38.095 element at address: 
0x200027e6d800 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:38.095 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6fcc0 with size: 
0.000183 MiB 00:04:38.096 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:38.096 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:38.096 list of memzone associated elements. size: 602.262573 MiB 00:04:38.096 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:38.096 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:38.096 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:38.096 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:38.096 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:38.096 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61906_0 00:04:38.096 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:38.096 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61906_0 00:04:38.096 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:38.096 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61906_0 00:04:38.096 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:38.096 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:38.096 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:38.096 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:38.096 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:38.096 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61906 00:04:38.096 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:38.096 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61906 00:04:38.096 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:38.096 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61906 00:04:38.096 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:38.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:38.096 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:38.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:38.096 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:38.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:38.096 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:38.096 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:38.096 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:38.096 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61906 00:04:38.096 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:38.096 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61906 00:04:38.096 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:38.096 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61906 00:04:38.096 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:38.096 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61906 00:04:38.096 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:38.096 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61906 00:04:38.096 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:38.096 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:38.096 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:38.096 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 
00:04:38.096 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:38.096 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:38.096 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:38.096 associated memzone info: size: 0.125366 MiB name: RG_ring_2_61906 00:04:38.096 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:38.096 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:38.096 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:04:38.096 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:38.096 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:38.096 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61906 00:04:38.096 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:04:38.096 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:38.096 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:38.096 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61906 00:04:38.096 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:38.096 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61906 00:04:38.096 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:04:38.096 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:38.096 17:51:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:38.096 17:51:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61906 00:04:38.096 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 61906 ']' 00:04:38.096 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 61906 00:04:38.096 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:38.096 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.096 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61906 00:04:38.096 killing process with pid 61906 00:04:38.096 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.096 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.096 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61906' 00:04:38.096 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 61906 00:04:38.096 17:51:44 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 61906 00:04:38.354 ************************************ 00:04:38.354 END TEST dpdk_mem_utility 00:04:38.354 ************************************ 00:04:38.354 00:04:38.354 real 0m1.667s 00:04:38.354 user 0m1.827s 00:04:38.354 sys 0m0.439s 00:04:38.354 17:51:45 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.354 17:51:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.354 17:51:45 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.354 17:51:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.354 17:51:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.354 17:51:45 -- common/autotest_common.sh@10 -- # set +x 00:04:38.354 ************************************ 00:04:38.354 START TEST event 00:04:38.354 ************************************ 00:04:38.354 17:51:45 event -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.613 * Looking for test storage... 00:04:38.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:38.613 17:51:45 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:38.613 17:51:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:38.613 17:51:45 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.613 17:51:45 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:38.613 17:51:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.613 17:51:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.613 ************************************ 00:04:38.613 START TEST event_perf 00:04:38.613 ************************************ 00:04:38.613 17:51:45 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.613 Running I/O for 1 seconds...[2024-07-24 17:51:45.454495] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:04:38.613 [2024-07-24 17:51:45.454697] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61995 ] 00:04:38.872 [2024-07-24 17:51:45.601034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:38.872 [2024-07-24 17:51:45.723421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.872 [2024-07-24 17:51:45.723777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:38.872 [2024-07-24 17:51:45.723780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.872 Running I/O for 1 seconds...[2024-07-24 17:51:45.723596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.248 00:04:40.249 lcore 0: 192426 00:04:40.249 lcore 1: 192426 00:04:40.249 lcore 2: 192424 00:04:40.249 lcore 3: 192426 00:04:40.249 done. 00:04:40.249 00:04:40.249 real 0m1.366s 00:04:40.249 user 0m4.166s 00:04:40.249 sys 0m0.076s 00:04:40.249 17:51:46 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.249 17:51:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.249 ************************************ 00:04:40.249 END TEST event_perf 00:04:40.249 ************************************ 00:04:40.249 17:51:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:40.249 17:51:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:40.249 17:51:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.249 17:51:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.249 ************************************ 00:04:40.249 START TEST event_reactor 00:04:40.249 ************************************ 00:04:40.249 17:51:46 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:40.249 [2024-07-24 17:51:46.876740] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
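event_perf, which just reported roughly 192k events on each of four lcores, is a standalone benchmark binary: -m sets the reactor core mask and -t the run time in seconds, judging by the '-t 1' / 'Running I/O for 1 seconds' pairing above. A minimal sketch of invoking it directly from a built tree:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    # Four reactors for one second, matching the run above; each lcore prints
    # the number of events it processed when the run finishes.
    "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1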
00:04:40.249 [2024-07-24 17:51:46.877342] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62034 ] 00:04:40.249 [2024-07-24 17:51:47.021815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.249 [2024-07-24 17:51:47.126386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.623 test_start 00:04:41.623 oneshot 00:04:41.623 tick 100 00:04:41.623 tick 100 00:04:41.623 tick 250 00:04:41.623 tick 100 00:04:41.623 tick 100 00:04:41.623 tick 100 00:04:41.623 tick 250 00:04:41.623 tick 500 00:04:41.623 tick 100 00:04:41.623 tick 100 00:04:41.623 tick 250 00:04:41.623 tick 100 00:04:41.623 tick 100 00:04:41.623 test_end 00:04:41.623 00:04:41.623 real 0m1.354s 00:04:41.623 user 0m1.190s 00:04:41.623 sys 0m0.056s 00:04:41.623 ************************************ 00:04:41.623 END TEST event_reactor 00:04:41.623 ************************************ 00:04:41.623 17:51:48 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.623 17:51:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:41.623 17:51:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.623 17:51:48 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:41.623 17:51:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.623 17:51:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.623 ************************************ 00:04:41.623 START TEST event_reactor_perf 00:04:41.623 ************************************ 00:04:41.623 17:51:48 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.623 [2024-07-24 17:51:48.287358] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
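The two reactor micro-benchmarks follow the same pattern and take only a run time: reactor traces its scheduled events as the oneshot/tick lines above, and reactor_perf prints a single 'Performance: N events per second' line, visible just below. A minimal sketch, with paths assuming an in-tree build:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    # Single-core runs, one second each, as in this log.
    "$SPDK_DIR/test/event/reactor/reactor" -t 1
    "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1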
00:04:41.623 [2024-07-24 17:51:48.287470] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62069 ] 00:04:41.623 [2024-07-24 17:51:48.432870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.623 [2024-07-24 17:51:48.548533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.994 test_start 00:04:42.994 test_end 00:04:42.994 Performance: 416778 events per second 00:04:42.994 00:04:42.994 real 0m1.365s 00:04:42.994 user 0m1.194s 00:04:42.994 sys 0m0.061s 00:04:42.994 ************************************ 00:04:42.994 END TEST event_reactor_perf 00:04:42.994 ************************************ 00:04:42.994 17:51:49 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.994 17:51:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:42.994 17:51:49 event -- event/event.sh@49 -- # uname -s 00:04:42.994 17:51:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:42.994 17:51:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:42.994 17:51:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.994 17:51:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.994 17:51:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.994 ************************************ 00:04:42.994 START TEST event_scheduler 00:04:42.994 ************************************ 00:04:42.994 17:51:49 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:42.994 * Looking for test storage... 00:04:42.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:42.994 17:51:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:42.994 17:51:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62131 00:04:42.994 17:51:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.994 17:51:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:42.994 17:51:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62131 00:04:42.994 17:51:49 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 62131 ']' 00:04:42.994 17:51:49 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.994 17:51:49 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.994 17:51:49 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.995 17:51:49 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.995 17:51:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.995 [2024-07-24 17:51:49.858637] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
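The scheduler test app launched above uses -m 0xF for a four-core mask, -p 0x2 to pick the main lcore (matching --main-lcore=2 in the EAL parameters that follow) and, importantly, --wait-for-rpc, so framework initialization pauses until a scheduler has been chosen over RPC. A minimal sketch of that launch; scheduler.sh runs it in the background and then waits for its RPC socket:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    # Pause before framework init so the scheduler can be configured first.
    "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!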
00:04:42.995 [2024-07-24 17:51:49.859576] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62131 ] 00:04:43.252 [2024-07-24 17:51:50.005725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.252 [2024-07-24 17:51:50.127546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.252 [2024-07-24 17:51:50.127697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.252 [2024-07-24 17:51:50.127863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.252 [2024-07-24 17:51:50.127866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.188 17:51:50 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.188 17:51:50 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:44.188 17:51:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:44.188 17:51:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.188 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.188 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.188 POWER: Cannot set governor of lcore 0 to performance 00:04:44.188 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.188 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.188 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.188 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.188 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:44.188 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:44.188 POWER: Unable to set Power Management Environment for lcore 0 00:04:44.188 [2024-07-24 17:51:50.882625] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:44.188 [2024-07-24 17:51:50.882638] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:44.188 [2024-07-24 17:51:50.882647] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:44.188 [2024-07-24 17:51:50.882658] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:44.188 [2024-07-24 17:51:50.882667] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:44.188 [2024-07-24 17:51:50.882674] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:44.188 17:51:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:44.188 17:51:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 [2024-07-24 17:51:50.959267] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
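rpc_cmd in the transcript above is the autotest wrapper around scripts/rpc.py, so the configuration step amounts to two plain RPC calls: select the dynamic scheduler while the framework is still paused, then let initialization proceed. The POWER and governor errors only mean CPU frequency scaling is unavailable in this VM; the dynamic scheduler still comes up without the dpdk governor. A minimal sketch against the default /var/tmp/spdk.sock:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    # Must happen before framework_start_init when the app was started with
    # --wait-for-rpc.
    "$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic
    "$SPDK_DIR/scripts/rpc.py" framework_start_init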
00:04:44.188 17:51:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:44.188 17:51:50 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.188 17:51:50 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.188 17:51:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 ************************************ 00:04:44.188 START TEST scheduler_create_thread 00:04:44.188 ************************************ 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 2 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 3 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 4 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 5 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 6 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 7 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 8 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 9 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 10 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.188 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:44.189 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.189 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.755 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.755 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:44.755 17:51:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:44.755 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.755 17:51:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.128 ************************************ 00:04:46.128 END TEST scheduler_create_thread 00:04:46.128 ************************************ 00:04:46.128 17:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.128 00:04:46.128 real 0m1.753s 00:04:46.128 user 0m0.018s 00:04:46.128 sys 0m0.008s 00:04:46.128 17:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.128 17:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.128 17:51:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:46.128 17:51:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62131 00:04:46.128 17:51:52 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 62131 ']' 00:04:46.128 17:51:52 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 62131 00:04:46.128 17:51:52 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:46.128 17:51:52 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.128 17:51:52 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62131 00:04:46.128 killing process with pid 62131 00:04:46.128 17:51:52 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:46.128 17:51:52 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:46.128 17:51:52 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62131' 00:04:46.128 17:51:52 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 62131 00:04:46.128 17:51:52 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 62131 00:04:46.387 [2024-07-24 17:51:53.205539] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
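
For readers following the scheduler_create_thread trace above, the RPC sequence it drives boils down to the lines below. This is a condensed sketch taken from the scheduler.sh steps visible in the log; rpc_cmd is the autotest_common.sh wrapper around scripts/rpc.py and is assumed to already point at the running scheduler test app and its plugin.

    # Create pinned active and idle threads on individual cores (mask -m, active % -a),
    # mirroring scheduler.sh@12-19 in the trace.
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30

    # A thread created idle can be bumped to 50% active by id (scheduler.sh@22-23)...
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50

    # ...and a thread can be deleted again by id (scheduler.sh@25-26).
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"
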
00:04:46.644 00:04:46.645 real 0m3.700s 00:04:46.645 user 0m6.732s 00:04:46.645 sys 0m0.395s 00:04:46.645 17:51:53 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.645 ************************************ 00:04:46.645 END TEST event_scheduler 00:04:46.645 ************************************ 00:04:46.645 17:51:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.645 17:51:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:46.645 17:51:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:46.645 17:51:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.645 17:51:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.645 17:51:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.645 ************************************ 00:04:46.645 START TEST app_repeat 00:04:46.645 ************************************ 00:04:46.645 17:51:53 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62237 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.645 Process app_repeat pid: 62237 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62237' 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:46.645 spdk_app_start Round 0 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:46.645 17:51:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62237 /var/tmp/spdk-nbd.sock 00:04:46.645 17:51:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62237 ']' 00:04:46.645 17:51:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.645 17:51:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:46.645 17:51:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.645 17:51:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.645 17:51:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.645 [2024-07-24 17:51:53.491366] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:04:46.645 [2024-07-24 17:51:53.491441] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62237 ] 00:04:46.902 [2024-07-24 17:51:53.624752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.902 [2024-07-24 17:51:53.727999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.902 [2024-07-24 17:51:53.728001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.468 17:51:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.468 17:51:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:47.468 17:51:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.726 Malloc0 00:04:47.984 17:51:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.241 Malloc1 00:04:48.241 17:51:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.241 17:51:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.550 /dev/nbd0 00:04:48.550 17:51:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.550 17:51:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:48.550 17:51:55 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.550 1+0 records in 00:04:48.550 1+0 records out 00:04:48.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485278 s, 8.4 MB/s 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:48.550 17:51:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:48.550 17:51:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.550 17:51:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.550 17:51:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.808 /dev/nbd1 00:04:48.808 17:51:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.808 17:51:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.808 1+0 records in 00:04:48.808 1+0 records out 00:04:48.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424032 s, 9.7 MB/s 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:48.808 17:51:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:48.808 17:51:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.808 17:51:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.808 17:51:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.808 17:51:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.808 
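
The readiness probe that produces the "grep -q -w nbdX /proc/partitions" and single-block dd lines above is sketched below. It is reconstructed from the common/autotest_common.sh@868-889 steps in the trace: the retry limit of 20 comes from the (( i <= 20 )) counters, while the sleep between attempts is an assumption not visible in the log.

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest   # scratch path taken from the log
        # wait for the kernel to publish the partition entry
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                               # assumed back-off
        done
        # a single O_DIRECT block read proves the NBD device is actually serving I/O
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1                                               # assumed back-off
        done
        return 1
    }
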
17:51:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.066 17:51:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:49.066 { 00:04:49.066 "bdev_name": "Malloc0", 00:04:49.066 "nbd_device": "/dev/nbd0" 00:04:49.066 }, 00:04:49.066 { 00:04:49.066 "bdev_name": "Malloc1", 00:04:49.066 "nbd_device": "/dev/nbd1" 00:04:49.066 } 00:04:49.066 ]' 00:04:49.066 17:51:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.066 { 00:04:49.066 "bdev_name": "Malloc0", 00:04:49.066 "nbd_device": "/dev/nbd0" 00:04:49.066 }, 00:04:49.066 { 00:04:49.066 "bdev_name": "Malloc1", 00:04:49.066 "nbd_device": "/dev/nbd1" 00:04:49.066 } 00:04:49.066 ]' 00:04:49.066 17:51:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.066 17:51:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.066 /dev/nbd1' 00:04:49.066 17:51:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.066 17:51:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.066 /dev/nbd1' 00:04:49.066 17:51:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.066 17:51:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.066 17:51:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.066 17:51:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.066 17:51:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.066 17:51:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.066 17:51:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.066 17:51:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.066 17:51:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.066 17:51:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.066 17:51:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.066 256+0 records in 00:04:49.066 256+0 records out 00:04:49.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106162 s, 98.8 MB/s 00:04:49.066 17:51:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.066 17:51:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.325 256+0 records in 00:04:49.325 256+0 records out 00:04:49.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030195 s, 34.7 MB/s 00:04:49.325 17:51:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.326 256+0 records in 00:04:49.326 256+0 records out 00:04:49.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0327649 s, 32.0 MB/s 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.326 17:51:56 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.326 17:51:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.585 17:51:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.585 17:51:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.585 17:51:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.585 17:51:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.585 17:51:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.585 17:51:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.585 17:51:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.585 17:51:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.585 17:51:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.585 17:51:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.844 17:51:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.844 17:51:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.844 17:51:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.844 17:51:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.844 17:51:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.844 17:51:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.844 17:51:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.844 17:51:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.844 17:51:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.844 17:51:56 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.844 17:51:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.103 17:51:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.103 17:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.103 17:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.103 17:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.363 17:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:50.363 17:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.363 17:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.363 17:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.363 17:51:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.363 17:51:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.363 17:51:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.363 17:51:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.363 17:51:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.621 17:51:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:50.621 [2024-07-24 17:51:57.582145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.879 [2024-07-24 17:51:57.682139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.879 [2024-07-24 17:51:57.682147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.879 [2024-07-24 17:51:57.724638] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.879 [2024-07-24 17:51:57.724692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:53.455 spdk_app_start Round 1 00:04:53.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.455 17:52:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.455 17:52:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:53.455 17:52:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62237 /var/tmp/spdk-nbd.sock 00:04:53.456 17:52:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62237 ']' 00:04:53.456 17:52:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.456 17:52:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.456 17:52:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
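
Before each round is torn down as above, the test pushes data through both NBD devices and reads it back. A sketch of that write/verify pass, reconstructed from the dd and cmp lines in the trace (paths copied from the log; 256 blocks of 4 KiB is the 1 MiB that cmp -n 1M later compares):

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write: push 1 MiB of random data through each Malloc-backed device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # verify: the same bytes must read back byte-for-byte from both devices
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"
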
00:04:53.456 17:52:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.456 17:52:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.025 17:52:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.025 17:52:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:54.025 17:52:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.025 Malloc0 00:04:54.025 17:52:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.284 Malloc1 00:04:54.284 17:52:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.284 17:52:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.544 /dev/nbd0 00:04:54.544 17:52:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.544 17:52:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.544 1+0 records in 00:04:54.544 1+0 records out 
00:04:54.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268395 s, 15.3 MB/s 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:54.544 17:52:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:54.544 17:52:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.544 17:52:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.544 17:52:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.803 /dev/nbd1 00:04:55.061 17:52:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.061 17:52:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.062 1+0 records in 00:04:55.062 1+0 records out 00:04:55.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382179 s, 10.7 MB/s 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:55.062 17:52:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:55.062 17:52:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.062 17:52:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.062 17:52:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.062 17:52:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.062 17:52:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:55.320 { 00:04:55.320 "bdev_name": "Malloc0", 00:04:55.320 "nbd_device": "/dev/nbd0" 00:04:55.320 }, 00:04:55.320 { 00:04:55.320 "bdev_name": "Malloc1", 00:04:55.320 "nbd_device": "/dev/nbd1" 00:04:55.320 } 
00:04:55.320 ]' 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.320 { 00:04:55.320 "bdev_name": "Malloc0", 00:04:55.320 "nbd_device": "/dev/nbd0" 00:04:55.320 }, 00:04:55.320 { 00:04:55.320 "bdev_name": "Malloc1", 00:04:55.320 "nbd_device": "/dev/nbd1" 00:04:55.320 } 00:04:55.320 ]' 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.320 /dev/nbd1' 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.320 /dev/nbd1' 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.320 256+0 records in 00:04:55.320 256+0 records out 00:04:55.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647237 s, 162 MB/s 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.320 256+0 records in 00:04:55.320 256+0 records out 00:04:55.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297907 s, 35.2 MB/s 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.320 256+0 records in 00:04:55.320 256+0 records out 00:04:55.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306155 s, 34.2 MB/s 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.320 17:52:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.321 17:52:02 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.321 17:52:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.579 17:52:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.579 17:52:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.579 17:52:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.579 17:52:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.579 17:52:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.579 17:52:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.579 17:52:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.579 17:52:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.579 17:52:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.579 17:52:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.146 17:52:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.146 17:52:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.146 17:52:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.146 17:52:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.146 17:52:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.146 17:52:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.146 17:52:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.146 17:52:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.146 17:52:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.146 17:52:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.146 17:52:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.146 17:52:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.146 17:52:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.146 17:52:03 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:56.146 17:52:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.405 17:52:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.405 17:52:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.405 17:52:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:56.405 17:52:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.405 17:52:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.405 17:52:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.405 17:52:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.405 17:52:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.405 17:52:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.663 17:52:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.663 [2024-07-24 17:52:03.595312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.921 [2024-07-24 17:52:03.699709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.921 [2024-07-24 17:52:03.699718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.921 [2024-07-24 17:52:03.744117] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.921 [2024-07-24 17:52:03.744170] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.203 spdk_app_start Round 2 00:05:00.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.203 17:52:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.203 17:52:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:00.203 17:52:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62237 /var/tmp/spdk-nbd.sock 00:05:00.203 17:52:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62237 ']' 00:05:00.203 17:52:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.203 17:52:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.203 17:52:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
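
The empty-disk check that closes each round (nbd_get_count, seen just above returning nbd_disks_json='[]') amounts to the following; the rpc.py and socket paths are copied from the log, and the error branch is written here as an explicit exit for clarity.

    rpc_server=/var/tmp/spdk-nbd.sock
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nbd_disks_json=$("$rpc_py" -s "$rpc_server" nbd_get_disks)        # '[]' once both disks are stopped
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)        # '|| true' keeps a zero count from tripping set -e
    if [ "$count" -ne 0 ]; then
        echo "NBD devices still attached: $nbd_disks_name"
        exit 1
    fi
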
00:05:00.203 17:52:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.203 17:52:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.203 17:52:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.203 17:52:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:00.203 17:52:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.203 Malloc0 00:05:00.203 17:52:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.461 Malloc1 00:05:00.461 17:52:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.461 17:52:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.719 /dev/nbd0 00:05:00.719 17:52:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.719 17:52:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.719 17:52:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.720 1+0 records in 00:05:00.720 1+0 records out 
00:05:00.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267001 s, 15.3 MB/s 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:00.720 17:52:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:00.720 17:52:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.720 17:52:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.720 17:52:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.008 /dev/nbd1 00:05:01.008 17:52:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.008 17:52:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.008 17:52:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:01.008 17:52:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:01.008 17:52:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:01.009 17:52:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:01.009 17:52:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:01.009 17:52:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:01.009 17:52:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:01.009 17:52:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:01.009 17:52:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.009 1+0 records in 00:05:01.009 1+0 records out 00:05:01.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402514 s, 10.2 MB/s 00:05:01.009 17:52:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.009 17:52:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:01.009 17:52:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.009 17:52:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:01.009 17:52:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:01.009 17:52:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.009 17:52:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.009 17:52:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.009 17:52:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.009 17:52:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.280 { 00:05:01.280 "bdev_name": "Malloc0", 00:05:01.280 "nbd_device": "/dev/nbd0" 00:05:01.280 }, 00:05:01.280 { 00:05:01.280 "bdev_name": "Malloc1", 00:05:01.280 "nbd_device": "/dev/nbd1" 00:05:01.280 } 
00:05:01.280 ]' 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.280 { 00:05:01.280 "bdev_name": "Malloc0", 00:05:01.280 "nbd_device": "/dev/nbd0" 00:05:01.280 }, 00:05:01.280 { 00:05:01.280 "bdev_name": "Malloc1", 00:05:01.280 "nbd_device": "/dev/nbd1" 00:05:01.280 } 00:05:01.280 ]' 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.280 /dev/nbd1' 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.280 /dev/nbd1' 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.280 256+0 records in 00:05:01.280 256+0 records out 00:05:01.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488496 s, 215 MB/s 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.280 256+0 records in 00:05:01.280 256+0 records out 00:05:01.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030661 s, 34.2 MB/s 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.280 17:52:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.537 256+0 records in 00:05:01.537 256+0 records out 00:05:01.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031795 s, 33.0 MB/s 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.537 17:52:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.795 17:52:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.795 17:52:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.795 17:52:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.795 17:52:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.795 17:52:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.795 17:52:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.795 17:52:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.795 17:52:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.795 17:52:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.796 17:52:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.363 17:52:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.363 17:52:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.621 17:52:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.621 [2024-07-24 17:52:09.575822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.880 [2024-07-24 17:52:09.676573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.880 [2024-07-24 17:52:09.676581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.880 [2024-07-24 17:52:09.719362] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.880 [2024-07-24 17:52:09.719414] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.161 17:52:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62237 /var/tmp/spdk-nbd.sock 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62237 ']' 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:06.161 17:52:12 event.app_repeat -- event/event.sh@39 -- # killprocess 62237 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 62237 ']' 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 62237 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62237 00:05:06.161 killing process with pid 62237 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62237' 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@969 -- # kill 62237 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@974 -- # wait 62237 00:05:06.161 spdk_app_start is called in Round 0. 00:05:06.161 Shutdown signal received, stop current app iteration 00:05:06.161 Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 reinitialization... 00:05:06.161 spdk_app_start is called in Round 1. 00:05:06.161 Shutdown signal received, stop current app iteration 00:05:06.161 Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 reinitialization... 00:05:06.161 spdk_app_start is called in Round 2. 00:05:06.161 Shutdown signal received, stop current app iteration 00:05:06.161 Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 reinitialization... 00:05:06.161 spdk_app_start is called in Round 3. 00:05:06.161 Shutdown signal received, stop current app iteration 00:05:06.161 17:52:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:06.161 17:52:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:06.161 00:05:06.161 real 0m19.447s 00:05:06.161 user 0m43.439s 00:05:06.161 sys 0m3.434s 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.161 ************************************ 00:05:06.161 END TEST app_repeat 00:05:06.161 ************************************ 00:05:06.161 17:52:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.161 17:52:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:06.161 17:52:12 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:06.161 17:52:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.161 17:52:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.161 17:52:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.161 ************************************ 00:05:06.161 START TEST cpu_locks 00:05:06.161 ************************************ 00:05:06.161 17:52:12 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:06.161 * Looking for test storage... 
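
The killprocess teardown traced above for pid 62237 (and earlier for the scheduler app) follows one pattern. The sketch below mirrors the common/autotest_common.sh lines visible in the log; the sudo branch is only hinted at by the '[' reactor_0 = sudo ']' test, so its body here is an assumption.

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                       # is the process still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"                             # assumed: the sudo path is not exercised in this log
        else
            kill "$pid"
        fi
        wait "$pid"                                      # reap it and propagate its exit status
    }
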
00:05:06.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:06.161 17:52:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:06.161 17:52:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:06.161 17:52:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:06.161 17:52:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:06.161 17:52:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.161 17:52:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.161 17:52:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.161 ************************************ 00:05:06.161 START TEST default_locks 00:05:06.161 ************************************ 00:05:06.161 17:52:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:06.161 17:52:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62864 00:05:06.161 17:52:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.161 17:52:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62864 00:05:06.161 17:52:13 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 62864 ']' 00:05:06.161 17:52:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.161 17:52:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.161 17:52:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.161 17:52:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.161 17:52:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.457 [2024-07-24 17:52:13.141387] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:05:06.457 [2024-07-24 17:52:13.142138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62864 ] 00:05:06.457 [2024-07-24 17:52:13.288177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.457 [2024-07-24 17:52:13.405926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.394 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.394 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:07.394 17:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62864 00:05:07.394 17:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62864 00:05:07.394 17:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.652 17:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62864 00:05:07.652 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 62864 ']' 00:05:07.652 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 62864 00:05:07.652 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:07.652 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.652 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62864 00:05:07.652 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.652 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.652 killing process with pid 62864 00:05:07.652 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62864' 00:05:07.652 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 62864 00:05:07.652 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 62864 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62864 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62864 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 62864 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 62864 ']' 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.217 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.217 ERROR: process (pid: 62864) is no longer running 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.217 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (62864) - No such process 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:08.217 00:05:08.217 real 0m1.831s 00:05:08.217 user 0m1.913s 00:05:08.217 sys 0m0.641s 00:05:08.217 ************************************ 00:05:08.217 END TEST default_locks 00:05:08.217 ************************************ 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.217 17:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.217 17:52:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:08.217 17:52:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.217 17:52:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.218 17:52:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.218 ************************************ 00:05:08.218 START TEST default_locks_via_rpc 00:05:08.218 ************************************ 00:05:08.218 17:52:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:08.218 17:52:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62928 00:05:08.218 17:52:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62928 00:05:08.218 17:52:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 62928 ']' 00:05:08.218 17:52:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.218 17:52:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
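Stripped of the xtrace noise, the default_locks test that just completed above asserts one thing: an spdk_tgt started with -m 0x1 holds a POSIX lock that lslocks reports under an spdk_cpu_lock path, and once the process is killed the harness's waitforlisten on the dead pid fails with "No such process". A minimal stand-alone sketch of that contract, using the binary and socket paths seen in this run (the socket-wait loop is an illustrative stand-in for the waitforlisten helper, not the real implementation):

#!/usr/bin/env bash
# Sketch: single target, verify the per-core lock appears and disappears.
set -u
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # path as seen in this log
SOCK=/var/tmp/spdk.sock

"$SPDK_BIN" -m 0x1 &                          # core 0 only, core locking left enabled (default)
pid=$!
while [ ! -S "$SOCK" ]; do sleep 0.2; done    # crude stand-in for waitforlisten

lslocks -p "$pid" | grep -q spdk_cpu_lock \
    && echo "core 0 lock held by pid $pid"    # same check the log's locks_exist helper performs

kill "$pid"; wait "$pid" 2>/dev/null || true
lslocks | grep spdk_cpu_lock || echo "no spdk_cpu_lock left after shutdown"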
00:05:08.218 17:52:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.218 17:52:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.218 17:52:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.218 17:52:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.218 [2024-07-24 17:52:15.033382] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:05:08.218 [2024-07-24 17:52:15.033512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62928 ] 00:05:08.218 [2024-07-24 17:52:15.173832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.476 [2024-07-24 17:52:15.275804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62928 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.044 17:52:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62928 00:05:09.610 17:52:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62928 00:05:09.610 17:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 62928 ']' 00:05:09.610 17:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 62928 00:05:09.610 17:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:09.610 17:52:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.610 17:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62928 00:05:09.610 17:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.610 killing process with pid 62928 00:05:09.610 17:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.610 17:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62928' 00:05:09.610 17:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 62928 00:05:09.610 17:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 62928 00:05:09.868 00:05:09.868 real 0m1.835s 00:05:09.868 user 0m1.971s 00:05:09.868 sys 0m0.616s 00:05:09.868 17:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.868 17:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.868 ************************************ 00:05:09.868 END TEST default_locks_via_rpc 00:05:09.868 ************************************ 00:05:09.868 17:52:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:09.868 17:52:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.868 17:52:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.868 17:52:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.126 ************************************ 00:05:10.126 START TEST non_locking_app_on_locked_coremask 00:05:10.126 ************************************ 00:05:10.126 17:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:10.126 17:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62997 00:05:10.126 17:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62997 /var/tmp/spdk.sock 00:05:10.126 17:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 62997 ']' 00:05:10.126 17:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.126 17:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.126 17:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.126 17:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.126 17:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.126 17:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.126 [2024-07-24 17:52:16.913678] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
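default_locks_via_rpc, which ended just above, exercises the same lock through runtime RPCs instead of process lifetime: framework_disable_cpumask_locks drops the per-core lock and framework_enable_cpumask_locks takes it back. A hedged sketch of the equivalent manual calls, assuming a target is already listening on /var/tmp/spdk.sock (both RPC names and the rpc.py path are taken from calls visible in this log):

#!/usr/bin/env bash
# Sketch: toggle the CPU core locks on a running target over JSON-RPC.
set -u
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # path as seen in this log
SOCK=/var/tmp/spdk.sock                            # default spdk_tgt RPC socket

"$RPC" -s "$SOCK" framework_disable_cpumask_locks  # per-core lock released
lslocks | grep spdk_cpu_lock || echo "locks dropped"

"$RPC" -s "$SOCK" framework_enable_cpumask_locks   # per-core lock re-acquired
lslocks | grep -q spdk_cpu_lock && echo "locks held again"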
00:05:10.126 [2024-07-24 17:52:16.913779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62997 ] 00:05:10.126 [2024-07-24 17:52:17.050915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.384 [2024-07-24 17:52:17.155457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.951 17:52:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.951 17:52:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:10.951 17:52:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63025 00:05:10.951 17:52:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:10.951 17:52:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63025 /var/tmp/spdk2.sock 00:05:10.951 17:52:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63025 ']' 00:05:10.951 17:52:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.951 17:52:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.951 17:52:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.951 17:52:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.951 17:52:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.951 [2024-07-24 17:52:17.902074] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:05:10.951 [2024-07-24 17:52:17.902170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63025 ] 00:05:11.212 [2024-07-24 17:52:18.046724] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:11.212 [2024-07-24 17:52:18.046782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.474 [2024-07-24 17:52:18.255957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.040 17:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.040 17:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:12.040 17:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62997 00:05:12.040 17:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62997 00:05:12.040 17:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.974 17:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62997 00:05:12.974 17:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 62997 ']' 00:05:12.974 17:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 62997 00:05:12.974 17:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:12.974 17:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.974 17:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62997 00:05:12.974 17:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.974 killing process with pid 62997 00:05:12.974 17:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.974 17:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62997' 00:05:12.974 17:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 62997 00:05:12.974 17:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 62997 00:05:13.564 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63025 00:05:13.564 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63025 ']' 00:05:13.564 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63025 00:05:13.564 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:13.564 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.564 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63025 00:05:13.832 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.832 killing process with pid 63025 00:05:13.832 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.832 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63025' 00:05:13.832 17:52:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63025 00:05:13.832 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63025 00:05:14.089 00:05:14.089 real 0m4.037s 00:05:14.089 user 0m4.556s 00:05:14.089 sys 0m1.113s 00:05:14.089 ************************************ 00:05:14.089 END TEST non_locking_app_on_locked_coremask 00:05:14.089 ************************************ 00:05:14.089 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.089 17:52:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.089 17:52:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:14.089 17:52:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.089 17:52:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.089 17:52:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.089 ************************************ 00:05:14.089 START TEST locking_app_on_unlocked_coremask 00:05:14.089 ************************************ 00:05:14.089 17:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:14.089 17:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63104 00:05:14.089 17:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63104 /var/tmp/spdk.sock 00:05:14.089 17:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63104 ']' 00:05:14.089 17:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.089 17:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.089 17:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:14.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.089 17:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.089 17:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.089 17:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.089 [2024-07-24 17:52:21.000760] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:05:14.089 [2024-07-24 17:52:21.000861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63104 ] 00:05:14.346 [2024-07-24 17:52:21.143449] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
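The non_locking_app_on_locked_coremask run that finished above shows the opt-out path: a second target may share an already-locked core as long as it is launched with --disable-cpumask-locks and its own RPC socket, which is when app.c prints the "CPU core locks deactivated." notice seen throughout this log. A sketch of that pairing, with flags and socket paths copied from the commands above (the sleep is a crude stand-in for the harness's waitforlisten):

#!/usr/bin/env bash
# Sketch: locked primary plus a lock-free secondary sharing core 0.
set -u
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 &                                                   # holds the core 0 lock
pid1=$!
"$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock --disable-cpumask-locks &    # shares core 0, takes no lock
pid2=$!
sleep 2                                                                # crude wait

lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "only pid $pid1 holds the core lock"
kill "$pid1" "$pid2"; wait 2>/dev/null || true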
00:05:14.346 [2024-07-24 17:52:21.143500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.346 [2024-07-24 17:52:21.247949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.280 17:52:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.280 17:52:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:15.280 17:52:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63132 00:05:15.280 17:52:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:15.280 17:52:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63132 /var/tmp/spdk2.sock 00:05:15.280 17:52:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63132 ']' 00:05:15.280 17:52:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.280 17:52:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.280 17:52:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.280 17:52:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.280 17:52:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.280 [2024-07-24 17:52:21.977041] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:05:15.280 [2024-07-24 17:52:21.977122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63132 ] 00:05:15.280 [2024-07-24 17:52:22.117749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.537 [2024-07-24 17:52:22.331932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.102 17:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.102 17:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:16.102 17:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63132 00:05:16.102 17:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63132 00:05:16.102 17:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.036 17:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63104 00:05:17.036 17:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63104 ']' 00:05:17.036 17:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 63104 00:05:17.036 17:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:17.036 17:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:17.036 17:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63104 00:05:17.036 17:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.036 killing process with pid 63104 00:05:17.036 17:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.036 17:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63104' 00:05:17.036 17:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 63104 00:05:17.036 17:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 63104 00:05:17.602 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63132 00:05:17.602 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63132 ']' 00:05:17.602 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 63132 00:05:17.602 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:17.602 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:17.602 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63132 00:05:17.602 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.602 killing process with pid 63132 00:05:17.602 17:52:24 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.602 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63132' 00:05:17.602 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 63132 00:05:17.602 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 63132 00:05:17.860 00:05:17.860 real 0m3.771s 00:05:17.860 user 0m4.167s 00:05:17.860 sys 0m1.054s 00:05:17.860 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.861 17:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.861 ************************************ 00:05:17.861 END TEST locking_app_on_unlocked_coremask 00:05:17.861 ************************************ 00:05:17.861 17:52:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:17.861 17:52:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.861 17:52:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.861 17:52:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.861 ************************************ 00:05:17.861 START TEST locking_app_on_locked_coremask 00:05:17.861 ************************************ 00:05:17.861 17:52:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:17.861 17:52:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63212 00:05:17.861 17:52:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63212 /var/tmp/spdk.sock 00:05:17.861 17:52:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63212 ']' 00:05:17.861 17:52:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.861 17:52:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.861 17:52:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.861 17:52:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.861 17:52:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.861 17:52:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.861 [2024-07-24 17:52:24.824310] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:05:17.861 [2024-07-24 17:52:24.824411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63212 ] 00:05:18.119 [2024-07-24 17:52:24.969399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.119 [2024-07-24 17:52:25.073200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63239 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63239 /var/tmp/spdk2.sock 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63239 /var/tmp/spdk2.sock 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 63239 /var/tmp/spdk2.sock 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63239 ']' 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.053 17:52:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.053 [2024-07-24 17:52:25.856846] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:05:19.053 [2024-07-24 17:52:25.856974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63239 ] 00:05:19.053 [2024-07-24 17:52:26.002765] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63212 has claimed it. 00:05:19.053 [2024-07-24 17:52:26.002829] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:19.619 ERROR: process (pid: 63239) is no longer running 00:05:19.619 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63239) - No such process 00:05:19.619 17:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.619 17:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:19.619 17:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:19.619 17:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.619 17:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.619 17:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.619 17:52:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63212 00:05:19.619 17:52:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63212 00:05:19.619 17:52:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.187 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63212 00:05:20.187 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63212 ']' 00:05:20.187 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63212 00:05:20.187 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:20.187 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.187 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63212 00:05:20.187 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.187 killing process with pid 63212 00:05:20.187 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.187 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63212' 00:05:20.187 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63212 00:05:20.187 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63212 00:05:20.462 00:05:20.462 real 0m2.654s 00:05:20.462 user 0m3.044s 00:05:20.462 sys 0m0.707s 00:05:20.462 ************************************ 00:05:20.462 END TEST locking_app_on_locked_coremask 00:05:20.462 ************************************ 00:05:20.462 17:52:27 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.462 17:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.720 17:52:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:20.720 17:52:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.720 17:52:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.720 17:52:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.720 ************************************ 00:05:20.720 START TEST locking_overlapped_coremask 00:05:20.720 ************************************ 00:05:20.720 17:52:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:20.720 17:52:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63286 00:05:20.720 17:52:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63286 /var/tmp/spdk.sock 00:05:20.720 17:52:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:20.720 17:52:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 63286 ']' 00:05:20.720 17:52:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.720 17:52:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.720 17:52:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.720 17:52:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.720 17:52:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.720 [2024-07-24 17:52:27.539901] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
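The locking_app_on_locked_coremask run above is the negative counterpart: when the second target asks for the same core without --disable-cpumask-locks, spdk_app_start cannot take the core 0 lock and the process exits, which is the "Cannot create lock on core 0, probably process 63212 has claimed it" / "Unable to acquire lock on assigned core mask - exiting." pair logged above. A self-contained sketch of that expectation (binary path as used in this run; the non-zero exit status of the refused instance is an assumption inferred from the harness treating it as NOT waitforlisten):

#!/usr/bin/env bash
# Sketch of the expected-failure path: two targets competing for core 0.
set -u
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # path as seen in this log

"$SPDK_BIN" -m 0x1 &                      # first target claims the core 0 lock
pid1=$!
sleep 2                                    # crude wait; the harness uses waitforlisten

# Same mask, separate RPC socket, core locking left enabled: expected to refuse to start.
if "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "unexpected: second instance started despite the held core lock"
else
    echo "second instance refused core 0, as the test expects"
fi

kill "$pid1"; wait "$pid1" 2>/dev/null || true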
00:05:20.720 [2024-07-24 17:52:27.540025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63286 ] 00:05:20.720 [2024-07-24 17:52:27.687763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.979 [2024-07-24 17:52:27.797662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.979 [2024-07-24 17:52:27.797768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.979 [2024-07-24 17:52:27.797771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.544 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.544 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:21.544 17:52:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63316 00:05:21.544 17:52:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:21.544 17:52:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63316 /var/tmp/spdk2.sock 00:05:21.544 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:21.544 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63316 /var/tmp/spdk2.sock 00:05:21.544 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:21.545 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.545 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:21.545 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.545 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 63316 /var/tmp/spdk2.sock 00:05:21.545 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 63316 ']' 00:05:21.545 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.545 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.545 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.545 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.545 17:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.803 [2024-07-24 17:52:28.554820] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:05:21.803 [2024-07-24 17:52:28.554925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63316 ] 00:05:21.803 [2024-07-24 17:52:28.701149] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63286 has claimed it. 00:05:21.803 [2024-07-24 17:52:28.705286] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:22.369 ERROR: process (pid: 63316) is no longer running 00:05:22.369 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63316) - No such process 00:05:22.369 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.369 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:22.369 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:22.369 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:22.369 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:22.369 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:22.369 17:52:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:22.369 17:52:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:22.369 17:52:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:22.370 17:52:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:22.370 17:52:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63286 00:05:22.370 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 63286 ']' 00:05:22.370 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 63286 00:05:22.370 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:22.370 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.370 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63286 00:05:22.370 killing process with pid 63286 00:05:22.370 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.370 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.370 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63286' 00:05:22.370 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 63286 00:05:22.370 17:52:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 63286 00:05:22.936 00:05:22.936 real 0m2.191s 00:05:22.936 user 0m6.011s 00:05:22.936 sys 0m0.458s 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.936 ************************************ 00:05:22.936 END TEST locking_overlapped_coremask 00:05:22.936 ************************************ 00:05:22.936 17:52:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:22.936 17:52:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.936 17:52:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.936 17:52:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.936 ************************************ 00:05:22.936 START TEST locking_overlapped_coremask_via_rpc 00:05:22.936 ************************************ 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63368 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63368 /var/tmp/spdk.sock 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63368 ']' 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.936 17:52:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.936 [2024-07-24 17:52:29.758915] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:05:22.936 [2024-07-24 17:52:29.759018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63368 ] 00:05:22.936 [2024-07-24 17:52:29.896129] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
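locking_overlapped_coremask, which ended above, moves the same collision to multi-core masks: the first target took -m 0x7 (cores 0, 1, 2), the second asked for -m 0x1c (cores 2, 3, 4) and was refused because the masks share core 2, and check_remaining_locks then confirmed the survivor still held one lock file per claimed core. The overlap named in the error message is plain bitmask arithmetic, sketched below (lock-file names are the ones listed by check_remaining_locks in the log):

#!/usr/bin/env bash
# 0x7 = cores 0,1,2 and 0x1c = cores 2,3,4: the AND of the two masks is the contested core.
printf 'overlapping mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 from the error above

# While the -m 0x7 target was the only one running, one lock file per core was expected:
ls -l /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002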
00:05:22.936 [2024-07-24 17:52:29.896201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.195 [2024-07-24 17:52:30.008125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.195 [2024-07-24 17:52:30.008183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.195 [2024-07-24 17:52:30.008189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.149 17:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.149 17:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:24.149 17:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:24.149 17:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63398 00:05:24.149 17:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63398 /var/tmp/spdk2.sock 00:05:24.149 17:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63398 ']' 00:05:24.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.150 17:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.150 17:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.150 17:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.150 17:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.150 17:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.150 [2024-07-24 17:52:30.862275] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:05:24.150 [2024-07-24 17:52:30.862375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63398 ] 00:05:24.150 [2024-07-24 17:52:31.014602] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:24.150 [2024-07-24 17:52:31.014972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.408 [2024-07-24 17:52:31.232335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.408 [2024-07-24 17:52:31.236318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:24.408 [2024-07-24 17:52:31.236322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.973 [2024-07-24 17:52:31.828465] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63368 has claimed it. 00:05:24.973 2024/07/24 17:52:31 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:24.973 request: 00:05:24.973 { 00:05:24.973 "method": "framework_enable_cpumask_locks", 00:05:24.973 "params": {} 00:05:24.973 } 00:05:24.973 Got JSON-RPC error response 00:05:24.973 GoRPCClient: error on JSON-RPC call 00:05:24.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
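The request/response block just above is the RPC variant of the same collision: with both targets started under --disable-cpumask-locks, framework_enable_cpumask_locks succeeds on the first target but returns JSON-RPC error -32603 ("Failed to claim CPU core: 2") on the second, because core 2 sits in both 0x7 and 0x1c. A sketch of reproducing that pair of calls by hand, with the socket paths used in this run (the non-zero exit of rpc.py on the failing call is an assumption):

#!/usr/bin/env bash
# Sketch: enable core locks on the 0x7 target, then watch the 0x1c target fail to claim core 2.
set -u
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$RPC" -s /var/tmp/spdk.sock framework_enable_cpumask_locks            # first target: claims cores 0-2
if "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
    echo "unexpected: second target claimed an already-locked core"
else
    echo "second target hit the -32603 'Failed to claim CPU core: 2' error, as logged above"
fi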
00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63368 /var/tmp/spdk.sock 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63368 ']' 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.973 17:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.231 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.231 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:25.231 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63398 /var/tmp/spdk2.sock 00:05:25.231 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63398 ']' 00:05:25.231 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.231 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.231 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:25.231 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.231 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.489 ************************************ 00:05:25.489 END TEST locking_overlapped_coremask_via_rpc 00:05:25.489 ************************************ 00:05:25.489 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.489 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:25.489 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:25.489 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:25.489 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:25.489 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:25.489 00:05:25.489 real 0m2.709s 00:05:25.489 user 0m1.386s 00:05:25.489 sys 0m0.248s 00:05:25.489 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.489 17:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.489 17:52:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:25.489 17:52:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63368 ]] 00:05:25.489 17:52:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63368 00:05:25.489 17:52:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63368 ']' 00:05:25.489 17:52:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63368 00:05:25.489 17:52:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:25.489 17:52:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.489 17:52:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63368 00:05:25.748 killing process with pid 63368 00:05:25.748 17:52:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.748 17:52:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.748 17:52:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63368' 00:05:25.748 17:52:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 63368 00:05:25.748 17:52:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 63368 00:05:26.004 17:52:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63398 ]] 00:05:26.004 17:52:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63398 00:05:26.004 17:52:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63398 ']' 00:05:26.004 17:52:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63398 00:05:26.004 17:52:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:26.004 17:52:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.004 
17:52:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63398 00:05:26.004 killing process with pid 63398 00:05:26.004 17:52:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:26.004 17:52:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:26.004 17:52:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63398' 00:05:26.004 17:52:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 63398 00:05:26.004 17:52:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 63398 00:05:26.262 17:52:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:26.262 17:52:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:26.262 17:52:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63368 ]] 00:05:26.262 17:52:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63368 00:05:26.262 17:52:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63368 ']' 00:05:26.262 17:52:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63368 00:05:26.262 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (63368) - No such process 00:05:26.262 17:52:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 63368 is not found' 00:05:26.262 Process with pid 63368 is not found 00:05:26.262 17:52:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63398 ]] 00:05:26.262 17:52:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63398 00:05:26.262 17:52:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63398 ']' 00:05:26.262 17:52:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63398 00:05:26.262 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (63398) - No such process 00:05:26.262 Process with pid 63398 is not found 00:05:26.262 17:52:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 63398 is not found' 00:05:26.262 17:52:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:26.262 00:05:26.262 real 0m20.236s 00:05:26.262 user 0m35.329s 00:05:26.262 sys 0m5.640s 00:05:26.262 17:52:33 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.262 17:52:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.262 ************************************ 00:05:26.262 END TEST cpu_locks 00:05:26.262 ************************************ 00:05:26.521 00:05:26.521 real 0m47.920s 00:05:26.521 user 1m32.202s 00:05:26.521 sys 0m9.958s 00:05:26.521 17:52:33 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.521 17:52:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.521 ************************************ 00:05:26.521 END TEST event 00:05:26.521 ************************************ 00:05:26.521 17:52:33 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:26.521 17:52:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.521 17:52:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.521 17:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.521 ************************************ 00:05:26.522 START TEST thread 00:05:26.522 ************************************ 00:05:26.522 17:52:33 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:26.522 * Looking for test storage... 
00:05:26.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:26.522 17:52:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:26.522 17:52:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:26.522 17:52:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.522 17:52:33 thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.522 ************************************ 00:05:26.522 START TEST thread_poller_perf 00:05:26.522 ************************************ 00:05:26.522 17:52:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:26.522 [2024-07-24 17:52:33.413865] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:05:26.522 [2024-07-24 17:52:33.414002] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63545 ] 00:05:26.780 [2024-07-24 17:52:33.545406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.780 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:26.780 [2024-07-24 17:52:33.654809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.214 ====================================== 00:05:28.214 busy:2107076214 (cyc) 00:05:28.214 total_run_count: 333000 00:05:28.214 tsc_hz: 2100000000 (cyc) 00:05:28.214 ====================================== 00:05:28.214 poller_cost: 6327 (cyc), 3012 (nsec) 00:05:28.214 00:05:28.214 real 0m1.346s 00:05:28.214 user 0m1.182s 00:05:28.214 sys 0m0.056s 00:05:28.214 17:52:34 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.214 17:52:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.214 ************************************ 00:05:28.214 END TEST thread_poller_perf 00:05:28.214 ************************************ 00:05:28.214 17:52:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:28.214 17:52:34 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:28.214 17:52:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.214 17:52:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.214 ************************************ 00:05:28.214 START TEST thread_poller_perf 00:05:28.214 ************************************ 00:05:28.214 17:52:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:28.214 [2024-07-24 17:52:34.806212] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:05:28.214 [2024-07-24 17:52:34.806389] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63581 ] 00:05:28.214 [2024-07-24 17:52:34.947879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.214 [2024-07-24 17:52:35.074616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.214 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:29.587 ====================================== 00:05:29.587 busy:2102301948 (cyc) 00:05:29.587 total_run_count: 4372000 00:05:29.587 tsc_hz: 2100000000 (cyc) 00:05:29.587 ====================================== 00:05:29.587 poller_cost: 480 (cyc), 228 (nsec) 00:05:29.587 00:05:29.587 real 0m1.372s 00:05:29.587 user 0m1.196s 00:05:29.587 sys 0m0.066s 00:05:29.587 17:52:36 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.587 17:52:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.587 ************************************ 00:05:29.587 END TEST thread_poller_perf 00:05:29.587 ************************************ 00:05:29.587 17:52:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:29.587 00:05:29.587 real 0m2.902s 00:05:29.587 user 0m2.454s 00:05:29.587 sys 0m0.237s 00:05:29.587 17:52:36 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.587 17:52:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.587 ************************************ 00:05:29.587 END TEST thread 00:05:29.587 ************************************ 00:05:29.587 17:52:36 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:05:29.587 17:52:36 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:29.587 17:52:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.587 17:52:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.587 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:29.587 ************************************ 00:05:29.587 START TEST app_cmdline 00:05:29.587 ************************************ 00:05:29.587 17:52:36 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:29.587 * Looking for test storage... 00:05:29.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:29.587 17:52:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:29.587 17:52:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63655 00:05:29.587 17:52:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63655 00:05:29.587 17:52:36 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 63655 ']' 00:05:29.587 17:52:36 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.587 17:52:36 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.587 17:52:36 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:29.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.587 17:52:36 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:29.587 17:52:36 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.587 17:52:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.587 [2024-07-24 17:52:36.378908] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:05:29.587 [2024-07-24 17:52:36.379020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63655 ] 00:05:29.587 [2024-07-24 17:52:36.524526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.846 [2024-07-24 17:52:36.657064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:30.779 17:52:37 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:30.779 { 00:05:30.779 "fields": { 00:05:30.779 "commit": "03a38592a", 00:05:30.779 "major": 24, 00:05:30.779 "minor": 9, 00:05:30.779 "patch": 0, 00:05:30.779 "suffix": "-pre" 00:05:30.779 }, 00:05:30.779 "version": "SPDK v24.09-pre git sha1 03a38592a" 00:05:30.779 } 00:05:30.779 17:52:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:30.779 17:52:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:30.779 17:52:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:30.779 17:52:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:30.779 17:52:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:30.779 17:52:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:30.779 17:52:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.779 17:52:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:30.779 17:52:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:30.779 17:52:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.779 17:52:37 app_cmdline -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:30.779 17:52:37 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:31.037 2024/07/24 17:52:37 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:05:31.037 request: 00:05:31.037 { 00:05:31.037 "method": "env_dpdk_get_mem_stats", 00:05:31.037 "params": {} 00:05:31.037 } 00:05:31.037 Got JSON-RPC error response 00:05:31.037 GoRPCClient: error on JSON-RPC call 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.037 17:52:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63655 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 63655 ']' 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 63655 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63655 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.037 killing process with pid 63655 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63655' 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@969 -- # kill 63655 00:05:31.037 17:52:37 app_cmdline -- common/autotest_common.sh@974 -- # wait 63655 00:05:31.603 00:05:31.603 real 0m2.033s 00:05:31.603 user 0m2.557s 00:05:31.603 sys 0m0.474s 00:05:31.603 17:52:38 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.603 ************************************ 00:05:31.603 END TEST app_cmdline 00:05:31.603 ************************************ 00:05:31.603 17:52:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:31.603 17:52:38 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:31.603 17:52:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.603 17:52:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.603 17:52:38 -- common/autotest_common.sh@10 -- # set +x 00:05:31.603 ************************************ 00:05:31.603 START TEST version 00:05:31.603 ************************************ 00:05:31.603 17:52:38 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:31.603 * Looking for test storage... 
00:05:31.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:31.603 17:52:38 version -- app/version.sh@17 -- # get_header_version major 00:05:31.603 17:52:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.603 17:52:38 version -- app/version.sh@14 -- # cut -f2 00:05:31.603 17:52:38 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.603 17:52:38 version -- app/version.sh@17 -- # major=24 00:05:31.603 17:52:38 version -- app/version.sh@18 -- # get_header_version minor 00:05:31.603 17:52:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.603 17:52:38 version -- app/version.sh@14 -- # cut -f2 00:05:31.603 17:52:38 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.603 17:52:38 version -- app/version.sh@18 -- # minor=9 00:05:31.603 17:52:38 version -- app/version.sh@19 -- # get_header_version patch 00:05:31.603 17:52:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.603 17:52:38 version -- app/version.sh@14 -- # cut -f2 00:05:31.603 17:52:38 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.603 17:52:38 version -- app/version.sh@19 -- # patch=0 00:05:31.603 17:52:38 version -- app/version.sh@20 -- # get_header_version suffix 00:05:31.603 17:52:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.603 17:52:38 version -- app/version.sh@14 -- # cut -f2 00:05:31.603 17:52:38 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.603 17:52:38 version -- app/version.sh@20 -- # suffix=-pre 00:05:31.603 17:52:38 version -- app/version.sh@22 -- # version=24.9 00:05:31.603 17:52:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:31.604 17:52:38 version -- app/version.sh@28 -- # version=24.9rc0 00:05:31.604 17:52:38 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:31.604 17:52:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:31.604 17:52:38 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:31.604 17:52:38 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:31.604 00:05:31.604 real 0m0.140s 00:05:31.604 user 0m0.087s 00:05:31.604 sys 0m0.086s 00:05:31.604 17:52:38 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.604 17:52:38 version -- common/autotest_common.sh@10 -- # set +x 00:05:31.604 ************************************ 00:05:31.604 END TEST version 00:05:31.604 ************************************ 00:05:31.604 17:52:38 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:05:31.604 17:52:38 -- spdk/autotest.sh@202 -- # uname -s 00:05:31.604 17:52:38 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:05:31.604 17:52:38 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:31.604 17:52:38 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:31.604 17:52:38 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:05:31.604 17:52:38 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:31.604 17:52:38 -- spdk/autotest.sh@264 -- # timing_exit lib 00:05:31.604 17:52:38 -- common/autotest_common.sh@730 -- # xtrace_disable 
00:05:31.604 17:52:38 -- common/autotest_common.sh@10 -- # set +x 00:05:31.604 17:52:38 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:31.604 17:52:38 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:05:31.604 17:52:38 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:05:31.604 17:52:38 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:05:31.604 17:52:38 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:05:31.604 17:52:38 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:05:31.604 17:52:38 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:31.604 17:52:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:31.604 17:52:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.604 17:52:38 -- common/autotest_common.sh@10 -- # set +x 00:05:31.604 ************************************ 00:05:31.604 START TEST nvmf_tcp 00:05:31.604 ************************************ 00:05:31.604 17:52:38 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:31.861 * Looking for test storage... 00:05:31.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:05:31.861 17:52:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:31.861 17:52:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:31.861 17:52:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:31.861 17:52:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:31.861 17:52:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.861 17:52:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.861 ************************************ 00:05:31.861 START TEST nvmf_target_core 00:05:31.861 ************************************ 00:05:31.861 17:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:31.861 * Looking for test storage... 00:05:31.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:05:31.861 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:31.861 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:31.861 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:31.861 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:31.861 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.861 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.861 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:31.862 ************************************ 00:05:31.862 START TEST nvmf_abort 00:05:31.862 ************************************ 00:05:31.862 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:32.121 * Looking for test storage... 
00:05:32.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.121 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:05:32.122 Cannot find device "nvmf_init_br" 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # true 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:05:32.122 Cannot find device "nvmf_tgt_br" 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # true 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:05:32.122 Cannot find device "nvmf_tgt_br2" 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # true 00:05:32.122 17:52:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:05:32.122 Cannot find device "nvmf_init_br" 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # true 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:05:32.122 Cannot find device "nvmf_tgt_br" 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # true 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:05:32.122 Cannot find device "nvmf_tgt_br2" 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # true 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:05:32.122 Cannot find device "nvmf_br" 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # true 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:05:32.122 Cannot find device "nvmf_init_if" 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # true 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:05:32.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:05:32.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:05:32.122 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:05:32.122 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:05:32.122 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:05:32.122 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:05:32.122 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:05:32.122 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:05:32.122 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:05:32.122 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:05:32.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:32.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:05:32.381 00:05:32.381 --- 10.0.0.2 ping statistics --- 00:05:32.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:32.381 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:05:32.381 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:05:32.381 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 00:05:32.381 00:05:32.381 --- 10.0.0.3 ping statistics --- 00:05:32.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:32.381 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:05:32.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:32.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:05:32.381 00:05:32.381 --- 10.0.0.1 ping statistics --- 00:05:32.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:32.381 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=64022 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 64022 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 64022 ']' 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.381 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.639 [2024-07-24 17:52:39.382434] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:05:32.639 [2024-07-24 17:52:39.382548] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:32.639 [2024-07-24 17:52:39.524332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.896 [2024-07-24 17:52:39.634215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:32.896 [2024-07-24 17:52:39.634284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:32.896 [2024-07-24 17:52:39.634296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:32.896 [2024-07-24 17:52:39.634305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:32.896 [2024-07-24 17:52:39.634313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:32.896 [2024-07-24 17:52:39.634429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.896 [2024-07-24 17:52:39.634809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.896 [2024-07-24 17:52:39.634820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.477 [2024-07-24 17:52:40.377121] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.477 Malloc0 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.477 
Delay0 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.477 [2024-07-24 17:52:40.441715] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.477 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.735 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.735 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:33.735 [2024-07-24 17:52:40.625778] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:36.263 Initializing NVMe Controllers 00:05:36.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:36.263 controller IO queue size 128 less than required 00:05:36.263 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:36.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:36.263 Initialization complete. Launching workers. 
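The xtrace above is abort.sh configuring the target through rpc_cmd. As a condensed sketch only, the same setup is shown here as direct rpc.py calls against the default /var/tmp/spdk.sock (an equivalence for readability, not an extra step the script literally runs; all arguments are copied from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256            # TCP transport, options exactly as passed by abort.sh
  $rpc bdev_malloc_create 64 4096 -b Malloc0                     # 64 MiB malloc bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000               # adds ~1 s of artificial latency to reads and writes
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0   # Delay0 becomes namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # initiator side: the abort example drives queue-depth-128 I/O at the slow namespace and aborts it
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The NS/CTRLR abort counters that follow are the output of that last command.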
00:05:36.263 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34349 00:05:36.263 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34410, failed to submit 62 00:05:36.263 success 34353, unsuccess 57, failed 0 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:36.263 rmmod nvme_tcp 00:05:36.263 rmmod nvme_fabrics 00:05:36.263 rmmod nvme_keyring 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 64022 ']' 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 64022 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 64022 ']' 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 64022 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64022 00:05:36.263 killing process with pid 64022 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64022' 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 64022 00:05:36.263 17:52:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 64022 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:05:36.263 00:05:36.263 real 0m4.280s 00:05:36.263 user 0m11.812s 00:05:36.263 sys 0m1.180s 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:36.263 ************************************ 00:05:36.263 END TEST nvmf_abort 00:05:36.263 ************************************ 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:36.263 17:52:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:36.264 ************************************ 00:05:36.264 START TEST nvmf_ns_hotplug_stress 00:05:36.264 ************************************ 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:36.264 * Looking for test storage... 
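Between the two tests nvmftestfini tears the abort fixture down. Condensed from the trace, the teardown amounts to roughly the following; note that the namespace removal itself runs as _remove_spdk_ns with its output redirected away above, so the ip netns delete line is an assumption about what that helper boils down to:

  modprobe -v -r nvme-tcp             # unloads nvme_tcp, nvme_fabrics, nvme_keyring (the rmmod lines above)
  modprobe -v -r nvme-fabrics
  kill 64022 && wait 64022            # stop the nvmf_tgt app started for nvmf_abort
  ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns (hidden above)
  ip -4 addr flush nvmf_init_if

ns_hotplug_stress.sh then rebuilds an equivalent fixture from scratch below.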
00:05:36.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:05:36.264 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:05:36.522 Cannot find device "nvmf_tgt_br" 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:05:36.522 Cannot find device "nvmf_tgt_br2" 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:05:36.522 Cannot find device "nvmf_tgt_br" 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:05:36.522 Cannot find device "nvmf_tgt_br2" 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:05:36.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:05:36.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:05:36.522 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:05:36.523 17:52:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:05:36.523 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:05:36.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:36.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:05:36.779 00:05:36.779 --- 10.0.0.2 ping statistics --- 00:05:36.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:36.779 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:05:36.779 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:05:36.779 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:05:36.779 00:05:36.779 --- 10.0.0.3 ping statistics --- 00:05:36.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:36.779 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:05:36.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:36.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:05:36.779 00:05:36.779 --- 10.0.0.1 ping statistics --- 00:05:36.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:36.779 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=64296 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:36.779 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 64296 00:05:36.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.780 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 64296 ']' 00:05:36.780 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.780 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.780 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.780 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.780 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:36.780 [2024-07-24 17:52:43.638858] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
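The "Cannot find device" / "Cannot open network namespace" messages above are expected: nvmf_veth_init first tries to flush a topology that does not exist yet, then builds it. Condensed from the trace, the topology is roughly the following sketch (the for loop compresses the one-per-device ip link set ... up calls shown above):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # host side, 10.0.0.1/24
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2/24 inside the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target address, 10.0.0.3/24
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br                      # the *_br peers hang off one bridge
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let the bridge forward

The three pings (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the path before nvmfappstart launches the target as ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE; the Starting SPDK / DPDK EAL lines around this point are that process coming up.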
00:05:36.780 [2024-07-24 17:52:43.639079] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:37.037 [2024-07-24 17:52:43.778345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.037 [2024-07-24 17:52:43.883772] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:37.037 [2024-07-24 17:52:43.883998] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:37.037 [2024-07-24 17:52:43.884100] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:37.037 [2024-07-24 17:52:43.884153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:37.037 [2024-07-24 17:52:43.884184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:37.037 [2024-07-24 17:52:43.884716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.037 [2024-07-24 17:52:43.884792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.037 [2024-07-24 17:52:43.884797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.037 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.037 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:37.037 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:37.037 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:37.037 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:37.295 17:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:37.295 17:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:37.295 17:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:37.552 [2024-07-24 17:52:44.335198] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.552 17:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:37.810 17:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:38.067 [2024-07-24 17:52:44.975967] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:38.067 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:38.324 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:38.581 Malloc0 00:05:38.581 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:38.874 Delay0 00:05:38.874 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.138 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:39.396 NULL1 00:05:39.396 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:39.654 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:39.654 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=64416 00:05:39.654 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:39.654 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.027 Read completed with error (sct=0, sc=11) 00:05:41.027 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.027 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:41.027 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:41.284 true 00:05:41.284 17:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:41.284 17:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.251 17:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.508 17:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 
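PERF_PID=64416 above is spdk_nvme_perf running 30 seconds of queue-depth-128, 512-byte random reads against cnode1, and NULL1 is the null bdev created just above (bdev_null_create NULL1 1000 512) and exported as a second namespace. The pattern repeating around this point (remove_ns, add_ns, bdev_null_resize, already twice above and many more times below) is the hot-plug loop from ns_hotplug_stress.sh; reduced to its shape, it is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  PERF_PID=64416                                                     # PID of the perf initiator started above
  null_size=1000
  while kill -0 "$PERF_PID"; do                                      # keep going while perf is still running
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove namespace 1 (Delay0)
      $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0    # re-attach it
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"                       # grow NULL1: 1001, 1002, ... as in the trace
  done

The bursts of "Read completed with error (sct=0, sc=11)" below are the initiator seeing reads fail while namespace 1 is momentarily detached, which is the condition this stress test is designed to exercise.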
00:05:42.508 17:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:42.766 true 00:05:42.766 17:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:42.766 17:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.023 17:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.281 17:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:43.281 17:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:43.281 true 00:05:43.281 17:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:43.281 17:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.539 17:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.796 17:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:43.796 17:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:44.054 true 00:05:44.054 17:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:44.054 17:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.427 17:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.427 17:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:45.427 17:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:45.685 true 00:05:45.685 17:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:45.685 17:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.618 17:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.876 17:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:46.876 17:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:47.134 true 00:05:47.134 17:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:47.134 17:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.507 17:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.764 17:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:48.764 17:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:49.021 true 00:05:49.021 17:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:49.021 17:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.954 17:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.954 17:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:49.954 17:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:50.211 true 00:05:50.211 17:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:50.211 17:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.469 17:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.725 17:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:50.725 17:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:50.982 true 00:05:50.982 17:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:50.982 17:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.915 17:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.915 17:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:51.915 17:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:52.173 true 00:05:52.173 17:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:52.173 17:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.431 17:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.690 17:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:52.690 17:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:52.982 true 00:05:52.982 17:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:52.982 17:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.916 17:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.916 17:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:53.916 17:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:54.482 true 00:05:54.482 17:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:54.482 17:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.482 17:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.048 17:53:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:55.048 17:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:55.306 true 00:05:55.306 17:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:55.306 17:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.564 17:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.822 17:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:55.822 17:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:55.822 true 00:05:55.822 17:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:55.822 17:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.755 17:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.013 17:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:57.013 17:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:57.272 true 00:05:57.272 17:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:57.272 17:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.590 17:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.590 17:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:57.590 17:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:57.849 true 00:05:57.849 17:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:57.849 17:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.786 17:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.045 17:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1017 00:05:59.045 17:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:59.303 true 00:05:59.303 17:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:05:59.303 17:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.561 17:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.820 17:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:59.820 17:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:00.079 true 00:06:00.364 17:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:06:00.364 17:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.364 17:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.646 17:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:00.646 17:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:00.906 true 00:06:00.906 17:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:06:00.906 17:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.841 17:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.100 17:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:02.100 17:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:02.666 true 00:06:02.666 17:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:06:02.666 17:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.041 17:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.041 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.041 17:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:04.041 17:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:04.299 true 00:06:04.299 17:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:06:04.299 17:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.234 17:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.234 17:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:05.234 17:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:05.493 true 00:06:05.493 17:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:06:05.493 17:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.066 17:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.323 17:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:06.323 17:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:06.587 true 00:06:06.587 17:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:06:06.587 17:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.845 17:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.410 17:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:07.410 17:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:07.410 true 00:06:07.410 17:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:06:07.410 17:53:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.667 17:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.923 17:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:07.923 17:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:08.179 true 00:06:08.179 17:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:06:08.179 17:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.108 17:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.364 17:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:09.364 17:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:09.621 true 00:06:09.621 17:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:06:09.621 17:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.878 Initializing NVMe Controllers 00:06:09.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:09.878 Controller IO queue size 128, less than required. 00:06:09.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:09.878 Controller IO queue size 128, less than required. 00:06:09.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:09.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:09.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:09.878 Initialization complete. Launching workers. 
00:06:09.878 ========================================================
00:06:09.878                                                                            Latency(us)
00:06:09.878 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:09.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1034.78       0.51   65414.69    3336.51 1027009.35
00:06:09.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11094.05       5.42   11538.43    3450.61  631437.10
00:06:09.878 ========================================================
00:06:09.878 Total                                                                    :   12128.83       5.92   16134.92    3336.51 1027009.35
00:06:10.135 17:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.392 17:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:10.392 17:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:10.648 true 00:06:10.648 17:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64416 00:06:10.648 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (64416) - No such process 00:06:10.648 17:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 64416 00:06:10.648 17:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.904 17:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.161 17:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:11.161 17:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:11.161 17:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:11.161 17:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.161 17:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:11.419 null0 00:06:11.419 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.419 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.419 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:11.419 null1 00:06:11.419 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.419 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.419 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:11.676 null2 00:06:11.676 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.677 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.677 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:11.934 null3 00:06:12.192 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.192 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.192 17:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:12.450 null4 00:06:12.450 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.450 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.450 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:12.450 null5 00:06:12.450 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.450 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.450 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:12.708 null6 00:06:12.708 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.708 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.708 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:12.966 null7 00:06:12.966 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.966 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.966 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:12.966 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.966 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:12.966 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
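[Editor note for readability: the xtrace entries around this point (ns_hotplug_stress.sh@58-@66) record the launcher phase of the hotplug stress test. The bash below is a minimal sketch reconstructed from the trace, not copied from the script source; the rpc_py shorthand and the exact loop syntax are assumptions, while the commands, arguments, and line tags come from the log.]

    # Sketch of the launcher phase, reconstructed from the xtrace above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shown fully expanded in the trace
    nthreads=8                                           # traced at ns_hotplug_stress.sh@58
    pids=()

    # @59-@60: create one null bdev per worker (null0 .. null7),
    # each 100 MB with a 4096-byte block size.
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done

    # @62-@64: start one add_remove worker per bdev in the background,
    # pairing 1-based namespace IDs with null0 .. null7, and remember each PID.
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=("$!")
    done

    # @66: block until all eight workers finish
    # (the "wait 65426 65428 ..." entry later in the trace).
    wait "${pids[@]}"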
00:06:12.966 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:12.966 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.966 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.966 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
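[Editor note: the interleaved @14-@18 entries above and below are the eight add_remove workers themselves. Read off the trace, each worker appears to be a short loop of the following shape; this is a sketch under the same assumptions as the launcher sketch above, with only the traced commands, argument order, and the 10-iteration bound taken from the log.]

    # add_remove <nsid> <bdev>: repeatedly attach and detach one namespace of cnode1.
    # Reconstructed from the xtrace (ns_hotplug_stress.sh@14-@18); loop syntax is assumed.
    add_remove() {
        local nsid=$1 bdev=$2                    # traced as "local nsid=1 bdev=null0", etc.
        for ((i = 0; i < 10; i++)); do           # traced as "(( i = 0 ))" / "(( i < 10 ))"
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

[Constantly attaching and detaching namespaces on a live subsystem while host I/O is in flight is the point of the test; the earlier "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" bursts appear to be the initiator-side symptom of reads racing against namespaces that are being removed.]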
00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.967 17:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 65426 65428 65429 65431 65432 65435 65436 65440 00:06:13.225 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.225 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.225 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.225 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.225 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.225 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.225 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.225 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.483 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.742 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.742 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.742 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.742 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.742 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.742 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.742 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.742 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.000 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.001 17:53:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.001 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.001 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.289 17:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.289 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.551 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.809 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.066 17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.066 
17:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.066 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.066 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.066 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.066 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.324 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.581 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.839 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.839 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.839 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.839 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.839 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.839 17:53:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.839 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.839 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.839 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.839 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.839 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.839 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:16.097 17:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.097 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.097 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.097 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.097 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.354 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.354 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.354 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.354 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.354 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.354 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.354 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.354 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.354 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.354 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.354 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.613 17:53:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.613 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.871 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.129 17:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:17.129 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.129 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.129 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:17.129 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.129 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.129 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:17.387 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:17.387 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:17.387 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.387 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.387 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.387 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:17.387 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:17.387 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:17.387 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:17.387 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.644 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.644 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.644 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:17.644 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.644 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.644 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
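The xtrace above comes from target/ns_hotplug_stress.sh: line @16 advances and bounds the loop counter, @17 hot-adds one of the null bdevs (null0..null7) back into nqn.2016-06.io.spdk:cnode1 as namespace 1..8, and @18 hot-removes a namespace again, stressing namespace attach/detach on a live subsystem. The script's exact add/remove scheduling is not visible in this excerpt; a minimal bash sketch that would produce a similar trace (the random namespace pick and the add/remove coin flip below are assumptions, not the script's actual policy) is:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do                                         # @16: bounded number of hotplug rounds
        n=$(( RANDOM % 8 + 1 ))                                              # assumed: pick namespace id 1..8 at random
        if (( RANDOM % 2 )); then
            "$RPC" nvmf_subsystem_add_ns -n "$n" "$NQN" "null$(( n - 1 ))"   # @17: hot-add bdev null<n-1> as NSID n
        else
            "$RPC" nvmf_subsystem_remove_ns "$NQN" "$n"                      # @18: hot-remove NSID n
        fi
    done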
00:06:17.644 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.644 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.644 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:17.645 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:17.645 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.645 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.645 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:17.645 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.645 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.645 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:17.902 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.903 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.903 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:17.903 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:17.903 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.903 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.903 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:17.903 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.903 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:17.903 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.903 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.903 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:06:18.161 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:18.161 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.161 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.161 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.161 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.161 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:18.161 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.161 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.161 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.161 17:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.161 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:18.161 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.161 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.419 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.419 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.419 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.419 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.419 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:18.419 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:18.419 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:18.419 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:06:18.419 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:18.419 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:06:18.419 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:18.420 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:18.420 rmmod nvme_tcp 00:06:18.420 rmmod nvme_fabrics 00:06:18.420 rmmod nvme_keyring 00:06:18.420 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:18.677 17:53:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:06:18.677 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:06:18.677 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 64296 ']' 00:06:18.677 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 64296 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 64296 ']' 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 64296 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64296 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:18.678 killing process with pid 64296 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64296' 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 64296 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 64296 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.678 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:18.937 00:06:18.937 real 0m42.566s 00:06:18.937 user 3m20.709s 00:06:18.937 sys 0m15.880s 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.937 ************************************ 00:06:18.937 END TEST nvmf_ns_hotplug_stress 00:06:18.937 ************************************ 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh 
--transport=tcp 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:18.937 ************************************ 00:06:18.937 START TEST nvmf_delete_subsystem 00:06:18.937 ************************************ 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:18.937 * Looking for test storage... 00:06:18.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.937 17:53:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:18.937 Cannot find device "nvmf_tgt_br" 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:18.937 Cannot find device "nvmf_tgt_br2" 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:18.937 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:19.196 Cannot find device "nvmf_tgt_br" 00:06:19.196 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:06:19.196 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:19.196 Cannot find device "nvmf_tgt_br2" 00:06:19.196 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:06:19.196 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:19.196 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:19.196 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:19.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:19.196 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:06:19.196 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:19.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:19.196 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:06:19.196 17:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:19.196 17:53:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:19.196 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:19.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:19.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:06:19.454 00:06:19.454 --- 10.0.0.2 ping statistics --- 00:06:19.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.454 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:19.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:19.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:06:19.454 00:06:19.454 --- 10.0.0.3 ping statistics --- 00:06:19.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.454 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:19.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:19.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:06:19.454 00:06:19.454 --- 10.0.0.1 ping statistics --- 00:06:19.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.454 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=66784 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 66784 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 66784 ']' 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.454 17:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.454 [2024-07-24 17:53:26.288607] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
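By this point nvmftestinit/nvmf_veth_init has built the virtual test network that the three pings above verify, and nvmf_tgt is being started inside the nvmf_tgt_ns_spdk namespace (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3 line, pid 66784). The topology is a host-side initiator veth (nvmf_init_if, 10.0.0.1) plus two target-side veths (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) moved into the namespace, with the peer ends enslaved to the nvmf_br bridge and an iptables rule admitting TCP port 4420. A condensed sketch of the same setup, with the second target interface omitted for brevity (run as root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br             # initiator side stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br               # target side goes into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                                       # the bridge ties the veth peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                                    # host -> target, as checked above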
00:06:19.454 [2024-07-24 17:53:26.288718] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.454 [2024-07-24 17:53:26.428371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.712 [2024-07-24 17:53:26.534258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.712 [2024-07-24 17:53:26.534307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.712 [2024-07-24 17:53:26.534318] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.712 [2024-07-24 17:53:26.534327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.712 [2024-07-24 17:53:26.534335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:19.712 [2024-07-24 17:53:26.534454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.712 [2024-07-24 17:53:26.534454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.276 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.276 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:20.276 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:20.276 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:20.276 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.276 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:20.276 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:20.276 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.276 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.533 [2024-07-24 17:53:27.259613] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.533 [2024-07-24 17:53:27.275883] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.533 NULL1 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.533 Delay0 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=66835 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:20.533 17:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:20.533 [2024-07-24 17:53:27.470369] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
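The RPC sequence traced above builds the scenario the rest of this test exercises: a TCP transport (with the options shown), subsystem nqn.2016-06.io.spdk:cnode1 allowing any host and capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev wrapped in a delay bdev (Delay0, roughly one second of added latency per operation, assuming the usual microsecond units for bdev_delay_create) so that plenty of I/O is still queued when the subsystem disappears. spdk_nvme_perf is then started in the background (perf_pid=66835 in this run), and after the two-second sleep the subsystem is deleted underneath it, which is the nvmf_delete_subsystem call that follows. Condensed from the trace, with rpc.py standing in for the script's rpc_cmd wrapper and full paths:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10        # -a: allow any host, -m: max 10 namespaces
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$RPC" bdev_null_create NULL1 1000 512                                    # 1000 MiB null bdev, 512-byte blocks
    "$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                             # 5 s of 70/30 random R/W at queue depth 128
    perf_pid=$!
    sleep 2
    "$RPC" nvmf_delete_subsystem "$NQN"                                       # delete while I/O is parked on Delay0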
00:06:22.432 17:53:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:22.432 17:53:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.432 17:53:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 [2024-07-24 17:53:29.506348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f018800d470 is same with the state(5) to be set 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with error (sct=0, sc=8) 00:06:22.690 Read completed with 
error (sct=0, sc=8) 00:06:22.690 Write completed with error (sct=0, sc=8) 00:06:22.690 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 [2024-07-24 17:53:29.508282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110b910 is same with the state(5) to be set 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 
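Each "completed with error (sct=0, sc=8)" line that spdk_nvme_perf prints here is one in-flight I/O coming back with status code type 0 (generic) and status code 0x08, Command Aborted due to SQ Deletion: the submission queues vanished when nqn.2016-06.io.spdk:cnode1 was deleted mid-run. The "starting I/O failed: -6" lines are new submissions being refused once the queue pair is gone (most likely -ENXIO). After the deletion the script only needs to confirm that the perf process drains and exits, which it does with the bounded poll and the NOT wait assertion that appear a little further down in this log; roughly, with perf_pid as in the sketch above:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do       # @35: is spdk_nvme_perf (66835 in this run) still alive?
        (( delay++ > 30 )) && exit 1                # @38: give up after ~15 s of half-second polls
        sleep 0.5                                   # @36
    done
    ! wait "$perf_pid"                              # @45: NOT wait - perf is expected to exit non-zero after the aborts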
00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 [2024-07-24 17:53:29.508723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0188000c00 is same with the state(5) to be set 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error 
(sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Write completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 Read completed with error (sct=0, sc=8) 00:06:22.691 starting I/O failed: -6 00:06:22.691 [2024-07-24 17:53:29.511082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110e390 is same with the state(5) to be set 00:06:22.691 starting I/O failed: -6 00:06:23.626 [2024-07-24 17:53:30.482617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec510 is same with the state(5) to be set 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error 
(sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 [2024-07-24 17:53:30.506005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110fa80 is same with the state(5) to be set 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 [2024-07-24 17:53:30.506488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f018800d7a0 is same with the state(5) to be set 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 
00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 [2024-07-24 17:53:30.506644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f018800d000 is same with the state(5) to be set 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Write completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.626 Read completed with error (sct=0, sc=8) 00:06:23.627 Write completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 Write completed with error (sct=0, sc=8) 00:06:23.627 Read completed with error (sct=0, sc=8) 00:06:23.627 [2024-07-24 17:53:30.507881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110e6c0 is same with the state(5) to be set 00:06:23.627 Initializing NVMe Controllers 00:06:23.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:23.627 Controller IO queue size 128, less than required. 00:06:23.627 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:23.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:23.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:23.627 Initialization complete. Launching workers. 
00:06:23.627 ======================================================== 00:06:23.627 Latency(us) 00:06:23.627 Device Information : IOPS MiB/s Average min max 00:06:23.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.64 0.09 899305.63 2509.27 1013286.17 00:06:23.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.36 0.08 923844.61 1403.91 1012158.53 00:06:23.627 ======================================================== 00:06:23.627 Total : 345.00 0.17 910498.23 1403.91 1013286.17 00:06:23.627 00:06:23.627 [2024-07-24 17:53:30.508696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ec510 (9): Bad file descriptor 00:06:23.627 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:23.627 17:53:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.627 17:53:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:23.627 17:53:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66835 00:06:23.627 17:53:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:24.194 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:24.194 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66835 00:06:24.194 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (66835) - No such process 00:06:24.194 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 66835 00:06:24.194 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:24.194 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 66835 00:06:24.194 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 66835 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.195 [2024-07-24 17:53:31.032147] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=66881 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66881 00:06:24.195 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.468 [2024-07-24 17:53:31.216604] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
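The xtrace above is test/nvmf/target/delete_subsystem.sh re-creating the subsystem over RPC (nvmf_create_subsystem, nvmf_subsystem_add_listener, nvmf_subsystem_add_ns), launching spdk_nvme_perf against it in the background, and then probing the perf PID roughly twice a second until the process goes away. A minimal sketch of that polling pattern, assuming the timeout action and the stderr handling (only the kill -0 / delay counter / sleep 0.5 structure is taken from the visible trace):

```bash
#!/usr/bin/env bash
# Sketch of the wait-for-perf pattern visible in the xtrace above.
# The perf arguments mirror the trace; the timeout action is an assumption.

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

delay=0
# kill -0 sends no signal; it only tests whether the PID still exists.
while kill -0 "$perf_pid" 2> /dev/null; do
    if (( delay++ > 20 )); then
        echo "spdk_nvme_perf still running after ~10s of polling" >&2  # assumed timeout action
        exit 1
    fi
    sleep 0.5
done

# Reap the background job. In this test a non-zero status can be expected
# when the subsystem is torn down underneath the initiator, so a failing
# wait is tolerated here (a simplification of the script's NOT/wait check).
wait "$perf_pid" || true
```

Once the loop falls through, the "kill: (66881) - No such process" message seen later in the log is simply the final kill -0 probe observing that the perf process has already exited.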
00:06:24.727 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.727 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66881 00:06:24.727 17:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:25.295 17:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:25.295 17:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66881 00:06:25.295 17:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:25.861 17:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:25.861 17:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66881 00:06:25.861 17:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.119 17:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:26.119 17:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66881 00:06:26.119 17:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.684 17:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:26.684 17:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66881 00:06:26.684 17:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:27.250 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.250 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66881 00:06:27.250 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:27.507 Initializing NVMe Controllers 00:06:27.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:27.507 Controller IO queue size 128, less than required. 00:06:27.507 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:27.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:27.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:27.507 Initialization complete. Launching workers. 
00:06:27.507 ======================================================== 00:06:27.507 Latency(us) 00:06:27.507 Device Information : IOPS MiB/s Average min max 00:06:27.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006828.65 1000229.16 1041530.18 00:06:27.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004491.15 1000151.02 1040658.40 00:06:27.507 ======================================================== 00:06:27.507 Total : 256.00 0.12 1005659.90 1000151.02 1041530.18 00:06:27.507 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66881 00:06:27.765 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (66881) - No such process 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 66881 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:27.765 rmmod nvme_tcp 00:06:27.765 rmmod nvme_fabrics 00:06:27.765 rmmod nvme_keyring 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 66784 ']' 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 66784 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 66784 ']' 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 66784 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66784 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.765 killing process with pid 66784 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66784' 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 66784 00:06:27.765 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 66784 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:28.024 00:06:28.024 real 0m9.194s 00:06:28.024 user 0m27.803s 00:06:28.024 sys 0m2.186s 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.024 ************************************ 00:06:28.024 END TEST nvmf_delete_subsystem 00:06:28.024 ************************************ 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:28.024 ************************************ 00:06:28.024 START TEST nvmf_host_management 00:06:28.024 ************************************ 00:06:28.024 17:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:28.284 * Looking for test storage... 
00:06:28.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:28.284 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:28.285 Cannot find device "nvmf_tgt_br" 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:28.285 Cannot find device "nvmf_tgt_br2" 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:28.285 Cannot find device "nvmf_tgt_br" 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:28.285 Cannot find device "nvmf_tgt_br2" 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:28.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:28.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:28.285 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:28.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:28.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:06:28.544 00:06:28.544 --- 10.0.0.2 ping statistics --- 00:06:28.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.544 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:28.544 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:28.544 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:06:28.544 00:06:28.544 --- 10.0.0.3 ping statistics --- 00:06:28.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.544 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:28.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:28.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:06:28.544 00:06:28.544 --- 10.0.0.1 ping statistics --- 00:06:28.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.544 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=67115 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 67115 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 67115 ']' 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.544 17:53:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:28.544 [2024-07-24 17:53:35.505467] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:06:28.545 [2024-07-24 17:53:35.505567] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.803 [2024-07-24 17:53:35.651194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.803 [2024-07-24 17:53:35.770477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:28.803 [2024-07-24 17:53:35.770539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:28.803 [2024-07-24 17:53:35.770555] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.803 [2024-07-24 17:53:35.770569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.803 [2024-07-24 17:53:35.770580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:28.803 [2024-07-24 17:53:35.770782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.803 [2024-07-24 17:53:35.771367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.803 [2024-07-24 17:53:35.771484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:28.803 [2024-07-24 17:53:35.771488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.738 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.738 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:29.738 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:29.738 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.738 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:29.738 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.738 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:29.738 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.738 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:29.738 [2024-07-24 17:53:36.403332] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.738 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:29.739 17:53:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:29.739 Malloc0 00:06:29.739 [2024-07-24 17:53:36.480182] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=67188 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 67188 /var/tmp/bdevperf.sock 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 67188 ']' 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:29.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:29.739 { 00:06:29.739 "params": { 00:06:29.739 "name": "Nvme$subsystem", 00:06:29.739 "trtype": "$TEST_TRANSPORT", 00:06:29.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:29.739 "adrfam": "ipv4", 00:06:29.739 "trsvcid": "$NVMF_PORT", 00:06:29.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:29.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:29.739 "hdgst": ${hdgst:-false}, 00:06:29.739 "ddgst": ${ddgst:-false} 00:06:29.739 }, 00:06:29.739 "method": "bdev_nvme_attach_controller" 00:06:29.739 } 00:06:29.739 EOF 00:06:29.739 )") 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:29.739 17:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:29.739 "params": { 00:06:29.739 "name": "Nvme0", 00:06:29.739 "trtype": "tcp", 00:06:29.739 "traddr": "10.0.0.2", 00:06:29.739 "adrfam": "ipv4", 00:06:29.739 "trsvcid": "4420", 00:06:29.739 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:29.739 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:29.739 "hdgst": false, 00:06:29.739 "ddgst": false 00:06:29.739 }, 00:06:29.739 "method": "bdev_nvme_attach_controller" 00:06:29.739 }' 00:06:29.739 [2024-07-24 17:53:36.582027] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:06:29.739 [2024-07-24 17:53:36.582740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67188 ] 00:06:29.998 [2024-07-24 17:53:36.731461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.998 [2024-07-24 17:53:36.848739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.256 Running I/O for 10 seconds... 
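Everything bdevperf needs for this run is in the JSON fragment that gen_nvmf_target_json prints above: a single bdev_nvme_attach_controller call pointing at the listener created earlier, fed to bdevperf via /dev/fd/63. A hand-assembled equivalent is sketched below; the outer "subsystems"/"bdev" wrapper is an assumption for illustration, while the params block and the bdevperf arguments are copied from the trace:

```bash
#!/usr/bin/env bash
# Hand-rolled stand-in for the gen_nvmf_target_json output seen above.
# The "subsystems"/"bdev" wrapper is assumed; the params block mirrors the trace.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# Same bdevperf arguments as in the trace: 64-deep queue, 64 KiB I/O,
# "verify" workload for 10 seconds, RPC socket at /var/tmp/bdevperf.sock.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10
```

With a config like that in place, a manual run should reach the same "Running I/O for 10 seconds..." point as the bdevperf instance driven by the test script.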
00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.565 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.825 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:30.825 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:06:30.825 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:30.825 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:30.825 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:30.825 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:30.825 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.825 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.825 [2024-07-24 
17:53:37.545211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1210310 is same with the state(5) to be set
00:06:30.825 [... the same tcp.c:1653:nvmf_tcp_qpair_set_recv_state error for tqpair=0x1210310 repeats dozens of times between 17:53:37.545271 and 17:53:37.545887, differing only in timestamp; condensed here ...]
00:06:30.825 [2024-07-24 17:53:37.546000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.825 [2024-07-24 17:53:37.546029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.825 [2024-07-24 17:53:37.546052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.825 [2024-07-24 17:53:37.546063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.825 [2024-07-24 17:53:37.546076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.825 [2024-07-24 17:53:37.546086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.825 [2024-07-24 17:53:37.546098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.825 [2024-07-24 17:53:37.546108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.825 [2024-07-24 17:53:37.546120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.825 [2024-07-24 17:53:37.546130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:06:30.826 [2024-07-24 17:53:37.546534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 
[2024-07-24 17:53:37.546756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.546954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 
17:53:37.546976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.546986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.547000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.547010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.826 [2024-07-24 17:53:37.547033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.826 [2024-07-24 17:53:37.547042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 
17:53:37.547242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 
17:53:37.547483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.827 [2024-07-24 17:53:37.547538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:30.827 [2024-07-24 17:53:37.547556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x734820 is same with the state(5) to be set 00:06:30.827 [2024-07-24 17:53:37.547637] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x734820 was disconnected and freed. reset controller. 00:06:30.827 [2024-07-24 17:53:37.549080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:30.827 task offset: 98944 on job bdev=Nvme0n1 fails 00:06:30.827 00:06:30.827 Latency(us) 00:06:30.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:30.827 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:30.827 Job: Nvme0n1 ended in about 0.52 seconds with error 00:06:30.827 Verification LBA range: start 0x0 length 0x400 00:06:30.827 Nvme0n1 : 0.52 1474.62 92.16 122.09 0.00 38950.40 3464.05 41693.38 00:06:30.827 =================================================================================================================== 00:06:30.827 Total : 1474.62 92.16 122.09 0.00 38950.40 3464.05 41693.38 00:06:30.827 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.827 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:30.827 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.827 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.827 [2024-07-24 17:53:37.551659] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.827 [2024-07-24 17:53:37.551699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x734af0 (9): Bad file descriptor 00:06:30.827 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.827 17:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:30.827 [2024-07-24 17:53:37.562451] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
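For reference, the xtraced rpc_cmd at target/host_management.sh line 85 above appears to wrap a plain scripts/rpc.py call; a standalone sketch of that call follows. The rpc.py path is the repo-local one used elsewhere in this log, and the RPC socket is assumed to be the default /var/tmp/spdk.sock (the rpc_cmd wrapper may pass an explicit socket that is not visible in the trace).

# Sketch: allow host0 to connect to subsystem cnode0, equivalent to the rpc_cmd traced above.
# Add -s <socket> if the target was started with a non-default RPC socket (assumption).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host0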
00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 67188 00:06:31.762 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (67188) - No such process 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:31.762 { 00:06:31.762 "params": { 00:06:31.762 "name": "Nvme$subsystem", 00:06:31.762 "trtype": "$TEST_TRANSPORT", 00:06:31.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:31.762 "adrfam": "ipv4", 00:06:31.762 "trsvcid": "$NVMF_PORT", 00:06:31.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:31.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:31.762 "hdgst": ${hdgst:-false}, 00:06:31.762 "ddgst": ${ddgst:-false} 00:06:31.762 }, 00:06:31.762 "method": "bdev_nvme_attach_controller" 00:06:31.762 } 00:06:31.762 EOF 00:06:31.762 )") 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:31.762 17:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:31.762 "params": { 00:06:31.762 "name": "Nvme0", 00:06:31.762 "trtype": "tcp", 00:06:31.762 "traddr": "10.0.0.2", 00:06:31.762 "adrfam": "ipv4", 00:06:31.762 "trsvcid": "4420", 00:06:31.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:31.762 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:31.762 "hdgst": false, 00:06:31.762 "ddgst": false 00:06:31.762 }, 00:06:31.762 "method": "bdev_nvme_attach_controller" 00:06:31.762 }' 00:06:31.762 [2024-07-24 17:53:38.629599] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:06:31.762 [2024-07-24 17:53:38.629704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67234 ] 00:06:32.021 [2024-07-24 17:53:38.777457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.021 [2024-07-24 17:53:38.884746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.280 Running I/O for 1 seconds... 
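The JSON fragment printed by gen_nvmf_target_json above is what this bdevperf run reads from /dev/fd/62. A rough standalone equivalent is sketched below; the outer "subsystems"/"config" wrapper is an assumption (only the inner method/params object appears in the trace), and /tmp/bdevperf_nvme.json is an arbitrary file name.

# Sketch: same attach-controller config and bdevperf flags as the traced run,
# but read from a regular file instead of a process-substitution fd.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 1

The flags match the traced invocation: queue depth 64, 64 KiB I/Os, verify workload, one second of runtime.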
00:06:33.214 00:06:33.214 Latency(us) 00:06:33.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.214 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:33.214 Verification LBA range: start 0x0 length 0x400 00:06:33.214 Nvme0n1 : 1.03 1870.32 116.89 0.00 0.00 33610.82 5367.71 30708.30 00:06:33.214 =================================================================================================================== 00:06:33.214 Total : 1870.32 116.89 0.00 0.00 33610.82 5367.71 30708.30 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:33.473 rmmod nvme_tcp 00:06:33.473 rmmod nvme_fabrics 00:06:33.473 rmmod nvme_keyring 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 67115 ']' 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 67115 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 67115 ']' 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 67115 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67115 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:33.473 killing process with pid 67115 00:06:33.473 17:53:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67115' 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 67115 00:06:33.473 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 67115 00:06:33.732 [2024-07-24 17:53:40.619884] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:33.732 00:06:33.732 real 0m5.712s 00:06:33.732 user 0m21.818s 00:06:33.732 sys 0m1.432s 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.732 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.732 ************************************ 00:06:33.732 END TEST nvmf_host_management 00:06:33.732 ************************************ 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.992 ************************************ 00:06:33.992 START TEST nvmf_lvol 00:06:33.992 ************************************ 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:33.992 * Looking for test storage... 
00:06:33.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:06:33.992 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:33.993 Cannot find device "nvmf_tgt_br" 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:33.993 Cannot find device "nvmf_tgt_br2" 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:33.993 Cannot find device "nvmf_tgt_br" 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:33.993 Cannot find device "nvmf_tgt_br2" 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:33.993 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:34.253 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:34.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:34.253 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:06:34.253 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:34.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:34.253 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:06:34.253 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:34.253 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:34.253 17:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:34.253 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:34.253 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:34.253 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:34.253 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:34.253 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:34.253 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:34.253 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:34.254 17:53:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:34.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:06:34.254 00:06:34.254 --- 10.0.0.2 ping statistics --- 00:06:34.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.254 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:34.254 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:34.254 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:06:34.254 00:06:34.254 --- 10.0.0.3 ping statistics --- 00:06:34.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.254 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:34.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:34.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:06:34.254 00:06:34.254 --- 10.0.0.1 ping statistics --- 00:06:34.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.254 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=67449 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 67449 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 67449 ']' 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.254 17:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:34.513 [2024-07-24 17:53:41.241932] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:06:34.513 [2024-07-24 17:53:41.242015] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.513 [2024-07-24 17:53:41.382203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.771 [2024-07-24 17:53:41.500698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.771 [2024-07-24 17:53:41.501000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.771 [2024-07-24 17:53:41.501086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.771 [2024-07-24 17:53:41.501168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.771 [2024-07-24 17:53:41.501279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:34.771 [2024-07-24 17:53:41.501492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.771 [2024-07-24 17:53:41.502174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.771 [2024-07-24 17:53:41.502180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.339 17:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.339 17:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:35.339 17:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:35.339 17:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:35.339 17:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:35.339 17:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.339 17:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:35.598 [2024-07-24 17:53:42.495658] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.598 17:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:35.855 17:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:35.856 17:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:36.170 17:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:36.170 17:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:36.428 17:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:36.685 17:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0b433e25-3bb3-4014-b39b-c96a051ecd7b 00:06:36.685 17:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
0b433e25-3bb3-4014-b39b-c96a051ecd7b lvol 20 00:06:36.943 17:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b7134925-3a58-4128-b373-b165da0aa0fc 00:06:36.943 17:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:37.201 17:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b7134925-3a58-4128-b373-b165da0aa0fc 00:06:37.460 17:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:37.460 [2024-07-24 17:53:44.426707] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.717 17:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:37.975 17:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67591 00:06:37.975 17:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:37.975 17:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:38.935 17:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b7134925-3a58-4128-b373-b165da0aa0fc MY_SNAPSHOT 00:06:39.192 17:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=57476f87-d41a-4d6d-ba51-a158028ceb14 00:06:39.192 17:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b7134925-3a58-4128-b373-b165da0aa0fc 30 00:06:39.757 17:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 57476f87-d41a-4d6d-ba51-a158028ceb14 MY_CLONE 00:06:40.015 17:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=510a90eb-94cd-420f-95c8-5ebedfd2708b 00:06:40.015 17:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 510a90eb-94cd-420f-95c8-5ebedfd2708b 00:06:40.580 17:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67591 00:06:48.687 Initializing NVMe Controllers 00:06:48.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:48.687 Controller IO queue size 128, less than required. 00:06:48.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:48.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:48.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:48.687 Initialization complete. Launching workers. 
00:06:48.687 ======================================================== 00:06:48.687 Latency(us) 00:06:48.687 Device Information : IOPS MiB/s Average min max 00:06:48.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10897.00 42.57 11749.26 2075.55 48897.99 00:06:48.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10850.50 42.38 11803.09 3261.45 50677.80 00:06:48.687 ======================================================== 00:06:48.687 Total : 21747.50 84.95 11776.12 2075.55 50677.80 00:06:48.687 00:06:48.687 17:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:48.687 17:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b7134925-3a58-4128-b373-b165da0aa0fc 00:06:48.945 17:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b433e25-3bb3-4014-b39b-c96a051ecd7b 00:06:49.202 17:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:49.202 17:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:49.202 17:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:49.202 17:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:49.202 17:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:06:49.202 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:49.202 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:06:49.202 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:49.202 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:49.202 rmmod nvme_tcp 00:06:49.202 rmmod nvme_fabrics 00:06:49.202 rmmod nvme_keyring 00:06:49.202 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:49.202 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 67449 ']' 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 67449 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 67449 ']' 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 67449 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67449 00:06:49.203 killing process with pid 67449 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 67449' 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 67449 00:06:49.203 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 67449 00:06:49.461 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:49.461 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:49.461 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:49.461 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:49.461 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:49.461 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.461 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.461 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.461 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:49.461 00:06:49.461 real 0m15.625s 00:06:49.461 user 1m4.104s 00:06:49.461 sys 0m5.282s 00:06:49.461 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.462 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.462 ************************************ 00:06:49.462 END TEST nvmf_lvol 00:06:49.462 ************************************ 00:06:49.462 17:53:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:49.462 17:53:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:49.462 17:53:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.462 17:53:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:49.462 ************************************ 00:06:49.462 START TEST nvmf_lvs_grow 00:06:49.462 ************************************ 00:06:49.462 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:49.721 * Looking for test storage... 
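Aside: stripped of the xtrace noise, the nvmf_lvol run that just completed above boils down to the RPC sequence below. Every command is taken from the trace; capturing the returned UUIDs into shell variables is a simplification of how the script stores them, and error handling and teardown are omitted:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Backing storage: two malloc bdevs striped into raid0, an lvol store on the
# raid, and a 20 MiB lvol carved out of the store.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                     # Malloc0
$rpc bdev_malloc_create 64 512                     # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# Export the lvol over NVMe/TCP on the namespace address and drive it with
# spdk_nvme_perf (randwrite, queue depth 128, 10 s in the run above).
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# While the I/O load runs: snapshot, resize the lvol to 30 MiB, clone the
# snapshot, then inflate the clone.
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"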
00:06:49.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:49.721 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:49.721 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:49.721 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.721 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.721 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.721 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.721 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
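Aside: the nvmf_veth_init trace that follows builds a small veth topology: one namespace for the target, three veth pairs whose bridge ends stay on the host, and a single Linux bridge tying them together. Condensed from the ip and iptables commands it emits (the link-up and pre-cleanup steps are omitted), the setup is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator address on the host side, two target addresses in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bridge the host-side ends, open TCP/4420 towards the initiator interface,
# and sanity-check reachability in both directions.
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1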
00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:49.722 Cannot find device "nvmf_tgt_br" 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:49.722 Cannot find device "nvmf_tgt_br2" 00:06:49.722 17:53:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:49.722 Cannot find device "nvmf_tgt_br" 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:49.722 Cannot find device "nvmf_tgt_br2" 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:49.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:49.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:49.722 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:49.991 17:53:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:49.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:49.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:06:49.991 00:06:49.991 --- 10.0.0.2 ping statistics --- 00:06:49.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.991 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:49.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:49.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:06:49.991 00:06:49.991 --- 10.0.0.3 ping statistics --- 00:06:49.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.991 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:49.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:49.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:06:49.991 00:06:49.991 --- 10.0.0.1 ping statistics --- 00:06:49.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.991 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=67961 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 67961 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 67961 ']' 00:06:49.991 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.992 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:49.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.992 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.992 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.992 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.992 17:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:49.992 [2024-07-24 17:53:56.957102] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:06:49.992 [2024-07-24 17:53:56.957206] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.250 [2024-07-24 17:53:57.104194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.250 [2024-07-24 17:53:57.219628] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.250 [2024-07-24 17:53:57.219686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.250 [2024-07-24 17:53:57.219701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.250 [2024-07-24 17:53:57.219714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.250 [2024-07-24 17:53:57.219726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:50.250 [2024-07-24 17:53:57.219763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.185 17:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.185 17:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:06:51.185 17:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:51.185 17:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:51.185 17:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.185 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.185 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:51.443 [2024-07-24 17:53:58.282925] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.443 ************************************ 00:06:51.443 START TEST lvs_grow_clean 00:06:51.443 ************************************ 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:51.443 17:53:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:51.443 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:52.009 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:52.009 17:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:52.267 17:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:06:52.267 17:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:06:52.267 17:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:52.525 17:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:52.525 17:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:52.525 17:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b lvol 150 00:06:52.783 17:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=481c522e-fcf5-4244-b4ec-80333c9f920b 00:06:52.783 17:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:52.783 17:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:53.040 [2024-07-24 17:53:59.766992] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:53.040 [2024-07-24 17:53:59.767071] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:53.040 true 00:06:53.040 17:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:53.040 17:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:06:53.297 17:54:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:53.297 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:53.554 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 481c522e-fcf5-4244-b4ec-80333c9f920b 00:06:53.813 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:53.813 [2024-07-24 17:54:00.739525] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.813 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:54.071 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:54.071 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68123 00:06:54.071 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:54.071 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68123 /var/tmp/bdevperf.sock 00:06:54.071 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 68123 ']' 00:06:54.071 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:54.071 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:54.071 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:54.071 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.071 17:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:54.071 [2024-07-24 17:54:00.996765] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
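Aside: the lvs_grow_clean steps traced around this point amount to growing a live lvol store while bdevperf writes to it over NVMe/TCP. Condensed from the RPC calls in the trace (UUIDs elided, the bdevperf invocation and teardown trimmed), the shape of the test is approximately:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

# 200 MiB file-backed AIO bdev carrying an lvol store with 4 MiB clusters.
truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

# 150 MiB lvol, then grow the backing file and let the AIO bdev pick it up.
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$aio"
$rpc bdev_aio_rescan aio_bdev        # 51200 -> 102400 blocks in the trace

# Export the lvol; while bdevperf runs randwrite against it through the
# cnode0 subsystem, grow the store into the new space: 49 -> 99 clusters.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_lvol_grow_lvstore -u "$lvs"
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99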
00:06:54.071 [2024-07-24 17:54:00.996854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68123 ] 00:06:54.329 [2024-07-24 17:54:01.133308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.329 [2024-07-24 17:54:01.271799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.269 17:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.269 17:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:06:55.269 17:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:55.269 Nvme0n1 00:06:55.536 17:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:55.536 [ 00:06:55.536 { 00:06:55.536 "aliases": [ 00:06:55.536 "481c522e-fcf5-4244-b4ec-80333c9f920b" 00:06:55.536 ], 00:06:55.536 "assigned_rate_limits": { 00:06:55.536 "r_mbytes_per_sec": 0, 00:06:55.536 "rw_ios_per_sec": 0, 00:06:55.536 "rw_mbytes_per_sec": 0, 00:06:55.536 "w_mbytes_per_sec": 0 00:06:55.536 }, 00:06:55.536 "block_size": 4096, 00:06:55.536 "claimed": false, 00:06:55.536 "driver_specific": { 00:06:55.536 "mp_policy": "active_passive", 00:06:55.536 "nvme": [ 00:06:55.536 { 00:06:55.536 "ctrlr_data": { 00:06:55.536 "ana_reporting": false, 00:06:55.536 "cntlid": 1, 00:06:55.536 "firmware_revision": "24.09", 00:06:55.536 "model_number": "SPDK bdev Controller", 00:06:55.536 "multi_ctrlr": true, 00:06:55.536 "oacs": { 00:06:55.536 "firmware": 0, 00:06:55.536 "format": 0, 00:06:55.536 "ns_manage": 0, 00:06:55.536 "security": 0 00:06:55.536 }, 00:06:55.536 "serial_number": "SPDK0", 00:06:55.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:55.536 "vendor_id": "0x8086" 00:06:55.536 }, 00:06:55.536 "ns_data": { 00:06:55.536 "can_share": true, 00:06:55.536 "id": 1 00:06:55.536 }, 00:06:55.536 "trid": { 00:06:55.536 "adrfam": "IPv4", 00:06:55.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:55.536 "traddr": "10.0.0.2", 00:06:55.536 "trsvcid": "4420", 00:06:55.536 "trtype": "TCP" 00:06:55.536 }, 00:06:55.536 "vs": { 00:06:55.536 "nvme_version": "1.3" 00:06:55.536 } 00:06:55.536 } 00:06:55.536 ] 00:06:55.536 }, 00:06:55.536 "memory_domains": [ 00:06:55.536 { 00:06:55.536 "dma_device_id": "system", 00:06:55.536 "dma_device_type": 1 00:06:55.536 } 00:06:55.536 ], 00:06:55.536 "name": "Nvme0n1", 00:06:55.536 "num_blocks": 38912, 00:06:55.536 "product_name": "NVMe disk", 00:06:55.536 "supported_io_types": { 00:06:55.536 "abort": true, 00:06:55.536 "compare": true, 00:06:55.536 "compare_and_write": true, 00:06:55.536 "copy": true, 00:06:55.536 "flush": true, 00:06:55.536 "get_zone_info": false, 00:06:55.536 "nvme_admin": true, 00:06:55.536 "nvme_io": true, 00:06:55.536 "nvme_io_md": false, 00:06:55.536 "nvme_iov_md": false, 00:06:55.536 "read": true, 00:06:55.536 "reset": true, 00:06:55.536 "seek_data": false, 00:06:55.536 "seek_hole": false, 00:06:55.536 "unmap": true, 00:06:55.536 "write": true, 00:06:55.536 
"write_zeroes": true, 00:06:55.536 "zcopy": false, 00:06:55.536 "zone_append": false, 00:06:55.536 "zone_management": false 00:06:55.536 }, 00:06:55.536 "uuid": "481c522e-fcf5-4244-b4ec-80333c9f920b", 00:06:55.536 "zoned": false 00:06:55.536 } 00:06:55.536 ] 00:06:55.536 17:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68170 00:06:55.536 17:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:55.536 17:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:55.795 Running I/O for 10 seconds... 00:06:56.727 Latency(us) 00:06:56.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.727 Nvme0n1 : 1.00 10010.00 39.10 0.00 0.00 0.00 0.00 0.00 00:06:56.727 =================================================================================================================== 00:06:56.727 Total : 10010.00 39.10 0.00 0.00 0.00 0.00 0.00 00:06:56.727 00:06:57.658 17:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:06:57.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.659 Nvme0n1 : 2.00 10029.00 39.18 0.00 0.00 0.00 0.00 0.00 00:06:57.659 =================================================================================================================== 00:06:57.659 Total : 10029.00 39.18 0.00 0.00 0.00 0.00 0.00 00:06:57.659 00:06:57.917 true 00:06:57.917 17:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:06:57.917 17:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:58.205 17:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:58.205 17:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:58.205 17:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 68170 00:06:58.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.810 Nvme0n1 : 3.00 10081.67 39.38 0.00 0.00 0.00 0.00 0.00 00:06:58.810 =================================================================================================================== 00:06:58.810 Total : 10081.67 39.38 0.00 0.00 0.00 0.00 0.00 00:06:58.810 00:06:59.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.741 Nvme0n1 : 4.00 10045.25 39.24 0.00 0.00 0.00 0.00 0.00 00:06:59.741 =================================================================================================================== 00:06:59.741 Total : 10045.25 39.24 0.00 0.00 0.00 0.00 0.00 00:06:59.741 00:07:00.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.675 Nvme0n1 : 5.00 9983.80 39.00 0.00 0.00 0.00 0.00 0.00 00:07:00.675 
=================================================================================================================== 00:07:00.675 Total : 9983.80 39.00 0.00 0.00 0.00 0.00 0.00 00:07:00.675 00:07:01.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.660 Nvme0n1 : 6.00 9921.50 38.76 0.00 0.00 0.00 0.00 0.00 00:07:01.660 =================================================================================================================== 00:07:01.660 Total : 9921.50 38.76 0.00 0.00 0.00 0.00 0.00 00:07:01.660 00:07:03.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.029 Nvme0n1 : 7.00 9881.29 38.60 0.00 0.00 0.00 0.00 0.00 00:07:03.029 =================================================================================================================== 00:07:03.029 Total : 9881.29 38.60 0.00 0.00 0.00 0.00 0.00 00:07:03.029 00:07:03.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.963 Nvme0n1 : 8.00 9855.75 38.50 0.00 0.00 0.00 0.00 0.00 00:07:03.963 =================================================================================================================== 00:07:03.963 Total : 9855.75 38.50 0.00 0.00 0.00 0.00 0.00 00:07:03.963 00:07:04.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.896 Nvme0n1 : 9.00 9834.56 38.42 0.00 0.00 0.00 0.00 0.00 00:07:04.896 =================================================================================================================== 00:07:04.896 Total : 9834.56 38.42 0.00 0.00 0.00 0.00 0.00 00:07:04.896 00:07:05.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.827 Nvme0n1 : 10.00 9798.40 38.27 0.00 0.00 0.00 0.00 0.00 00:07:05.827 =================================================================================================================== 00:07:05.827 Total : 9798.40 38.27 0.00 0.00 0.00 0.00 0.00 00:07:05.827 00:07:05.827 00:07:05.827 Latency(us) 00:07:05.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.827 Nvme0n1 : 10.01 9804.68 38.30 0.00 0.00 13050.42 4056.99 31706.94 00:07:05.827 =================================================================================================================== 00:07:05.827 Total : 9804.68 38.30 0.00 0.00 13050.42 4056.99 31706.94 00:07:05.827 0 00:07:05.827 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68123 00:07:05.827 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 68123 ']' 00:07:05.827 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 68123 00:07:05.827 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:05.827 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.827 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68123 00:07:05.827 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:05.827 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = 
sudo ']' 00:07:05.827 killing process with pid 68123 00:07:05.827 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68123' 00:07:05.827 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 68123 00:07:05.827 Received shutdown signal, test time was about 10.000000 seconds 00:07:05.827 00:07:05.827 Latency(us) 00:07:05.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.827 =================================================================================================================== 00:07:05.827 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:05.827 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 68123 00:07:06.084 17:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:06.084 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:06.705 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:06.705 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:07:06.705 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:06.705 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:06.705 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:06.977 [2024-07-24 17:54:13.806692] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:06.977 17:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:07:07.236 2024/07/24 17:54:14 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:83d5c817-7776-4487-a3aa-14fb45cf2b6b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:07:07.236 request: 00:07:07.236 { 00:07:07.236 "method": "bdev_lvol_get_lvstores", 00:07:07.236 "params": { 00:07:07.236 "uuid": "83d5c817-7776-4487-a3aa-14fb45cf2b6b" 00:07:07.236 } 00:07:07.236 } 00:07:07.236 Got JSON-RPC error response 00:07:07.236 GoRPCClient: error on JSON-RPC call 00:07:07.236 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:07.236 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.236 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.236 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.236 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:07.495 aio_bdev 00:07:07.495 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 481c522e-fcf5-4244-b4ec-80333c9f920b 00:07:07.495 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=481c522e-fcf5-4244-b4ec-80333c9f920b 00:07:07.495 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:07.495 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:07.495 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:07.495 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:07.495 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:07.753 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 481c522e-fcf5-4244-b4ec-80333c9f920b -t 2000 00:07:08.011 [ 00:07:08.011 { 00:07:08.011 "aliases": [ 00:07:08.011 "lvs/lvol" 00:07:08.011 ], 00:07:08.011 "assigned_rate_limits": { 00:07:08.011 "r_mbytes_per_sec": 0, 00:07:08.011 "rw_ios_per_sec": 0, 00:07:08.011 "rw_mbytes_per_sec": 0, 00:07:08.011 
"w_mbytes_per_sec": 0 00:07:08.011 }, 00:07:08.011 "block_size": 4096, 00:07:08.011 "claimed": false, 00:07:08.011 "driver_specific": { 00:07:08.011 "lvol": { 00:07:08.011 "base_bdev": "aio_bdev", 00:07:08.011 "clone": false, 00:07:08.011 "esnap_clone": false, 00:07:08.011 "lvol_store_uuid": "83d5c817-7776-4487-a3aa-14fb45cf2b6b", 00:07:08.011 "num_allocated_clusters": 38, 00:07:08.011 "snapshot": false, 00:07:08.011 "thin_provision": false 00:07:08.011 } 00:07:08.011 }, 00:07:08.011 "name": "481c522e-fcf5-4244-b4ec-80333c9f920b", 00:07:08.011 "num_blocks": 38912, 00:07:08.011 "product_name": "Logical Volume", 00:07:08.011 "supported_io_types": { 00:07:08.011 "abort": false, 00:07:08.011 "compare": false, 00:07:08.011 "compare_and_write": false, 00:07:08.011 "copy": false, 00:07:08.011 "flush": false, 00:07:08.011 "get_zone_info": false, 00:07:08.011 "nvme_admin": false, 00:07:08.011 "nvme_io": false, 00:07:08.011 "nvme_io_md": false, 00:07:08.011 "nvme_iov_md": false, 00:07:08.011 "read": true, 00:07:08.011 "reset": true, 00:07:08.011 "seek_data": true, 00:07:08.011 "seek_hole": true, 00:07:08.011 "unmap": true, 00:07:08.011 "write": true, 00:07:08.011 "write_zeroes": true, 00:07:08.011 "zcopy": false, 00:07:08.011 "zone_append": false, 00:07:08.011 "zone_management": false 00:07:08.011 }, 00:07:08.011 "uuid": "481c522e-fcf5-4244-b4ec-80333c9f920b", 00:07:08.011 "zoned": false 00:07:08.011 } 00:07:08.011 ] 00:07:08.011 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:08.011 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:07:08.011 17:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:08.269 17:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:08.269 17:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:08.269 17:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:07:08.527 17:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:08.527 17:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 481c522e-fcf5-4244-b4ec-80333c9f920b 00:07:08.784 17:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 83d5c817-7776-4487-a3aa-14fb45cf2b6b 00:07:09.042 17:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:09.301 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:09.868 ************************************ 00:07:09.868 END TEST lvs_grow_clean 00:07:09.868 ************************************ 00:07:09.868 00:07:09.868 real 0m18.355s 00:07:09.868 user 0m16.890s 
00:07:09.868 sys 0m2.924s 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.868 ************************************ 00:07:09.868 START TEST lvs_grow_dirty 00:07:09.868 ************************************ 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:09.868 17:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:10.126 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:10.126 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:10.385 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:10.385 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:10.385 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:10.643 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:10.643 17:54:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:10.643 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9849a321-5f49-495a-9baf-ac3ff038cd21 lvol 150 00:07:10.902 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e35405cc-4bd1-46ab-88ef-1c824800a5a2 00:07:10.902 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:10.902 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:11.159 [2024-07-24 17:54:17.938988] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:11.159 [2024-07-24 17:54:17.939057] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:11.159 true 00:07:11.159 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:11.159 17:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:11.416 17:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:11.416 17:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:11.416 17:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e35405cc-4bd1-46ab-88ef-1c824800a5a2 00:07:11.674 17:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:12.239 [2024-07-24 17:54:18.919471] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.240 17:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
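For readability, the target-side setup that the trace above walks through boils down to the sequence below. This is a sketch reconstructed from the commands logged above, not the test script itself; $RPC and $AIO are shorthand introduced here, and capturing the lvstore/lvol UUIDs into variables assumes rpc.py prints them on creation, as it does in this run.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                    # shorthand for the rpc.py path logged above
AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev         # backing file used by this test

truncate -s 200M "$AIO"                                            # 200 MiB backing file
$RPC bdev_aio_create "$AIO" aio_bdev 4096                          # expose it as an AIO bdev with 4 KiB blocks
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)               # 4 MiB clusters -> 49 data clusters in this run
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)                   # 150 MiB logical volume

truncate -s 400M "$AIO"                                            # grow the file underneath the bdev
$RPC bdev_aio_rescan aio_bdev                                      # block count doubles; lvstore still reports 49 clusters

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420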
00:07:12.240 17:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:12.240 17:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68571 00:07:12.240 17:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:12.240 17:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68571 /var/tmp/bdevperf.sock 00:07:12.240 17:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 68571 ']' 00:07:12.240 17:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:12.240 17:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.240 17:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:12.240 17:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.240 17:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:12.240 [2024-07-24 17:54:19.187276] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:07:12.240 [2024-07-24 17:54:19.187361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68571 ] 00:07:12.497 [2024-07-24 17:54:19.326758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.497 [2024-07-24 17:54:19.452993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.478 17:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.478 17:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:13.478 17:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:13.478 Nvme0n1 00:07:13.478 17:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:13.738 [ 00:07:13.738 { 00:07:13.738 "aliases": [ 00:07:13.738 "e35405cc-4bd1-46ab-88ef-1c824800a5a2" 00:07:13.738 ], 00:07:13.738 "assigned_rate_limits": { 00:07:13.738 "r_mbytes_per_sec": 0, 00:07:13.738 "rw_ios_per_sec": 0, 00:07:13.738 "rw_mbytes_per_sec": 0, 00:07:13.738 "w_mbytes_per_sec": 0 00:07:13.738 }, 00:07:13.738 "block_size": 4096, 00:07:13.738 "claimed": false, 00:07:13.738 "driver_specific": { 00:07:13.738 "mp_policy": "active_passive", 00:07:13.738 "nvme": [ 00:07:13.738 { 00:07:13.738 "ctrlr_data": { 
00:07:13.738 "ana_reporting": false, 00:07:13.738 "cntlid": 1, 00:07:13.738 "firmware_revision": "24.09", 00:07:13.738 "model_number": "SPDK bdev Controller", 00:07:13.738 "multi_ctrlr": true, 00:07:13.738 "oacs": { 00:07:13.738 "firmware": 0, 00:07:13.738 "format": 0, 00:07:13.738 "ns_manage": 0, 00:07:13.738 "security": 0 00:07:13.738 }, 00:07:13.738 "serial_number": "SPDK0", 00:07:13.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:13.738 "vendor_id": "0x8086" 00:07:13.738 }, 00:07:13.738 "ns_data": { 00:07:13.738 "can_share": true, 00:07:13.738 "id": 1 00:07:13.738 }, 00:07:13.738 "trid": { 00:07:13.738 "adrfam": "IPv4", 00:07:13.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:13.738 "traddr": "10.0.0.2", 00:07:13.738 "trsvcid": "4420", 00:07:13.738 "trtype": "TCP" 00:07:13.738 }, 00:07:13.738 "vs": { 00:07:13.738 "nvme_version": "1.3" 00:07:13.738 } 00:07:13.738 } 00:07:13.738 ] 00:07:13.738 }, 00:07:13.738 "memory_domains": [ 00:07:13.738 { 00:07:13.738 "dma_device_id": "system", 00:07:13.738 "dma_device_type": 1 00:07:13.738 } 00:07:13.738 ], 00:07:13.738 "name": "Nvme0n1", 00:07:13.738 "num_blocks": 38912, 00:07:13.738 "product_name": "NVMe disk", 00:07:13.738 "supported_io_types": { 00:07:13.738 "abort": true, 00:07:13.738 "compare": true, 00:07:13.738 "compare_and_write": true, 00:07:13.738 "copy": true, 00:07:13.738 "flush": true, 00:07:13.738 "get_zone_info": false, 00:07:13.738 "nvme_admin": true, 00:07:13.738 "nvme_io": true, 00:07:13.738 "nvme_io_md": false, 00:07:13.738 "nvme_iov_md": false, 00:07:13.738 "read": true, 00:07:13.738 "reset": true, 00:07:13.738 "seek_data": false, 00:07:13.738 "seek_hole": false, 00:07:13.738 "unmap": true, 00:07:13.738 "write": true, 00:07:13.738 "write_zeroes": true, 00:07:13.738 "zcopy": false, 00:07:13.738 "zone_append": false, 00:07:13.738 "zone_management": false 00:07:13.738 }, 00:07:13.738 "uuid": "e35405cc-4bd1-46ab-88ef-1c824800a5a2", 00:07:13.738 "zoned": false 00:07:13.738 } 00:07:13.738 ] 00:07:13.738 17:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68617 00:07:13.738 17:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:13.738 17:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:13.738 Running I/O for 10 seconds... 
00:07:15.113 Latency(us) 00:07:15.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.113 Nvme0n1 : 1.00 10550.00 41.21 0.00 0.00 0.00 0.00 0.00 00:07:15.113 =================================================================================================================== 00:07:15.113 Total : 10550.00 41.21 0.00 0.00 0.00 0.00 0.00 00:07:15.113 00:07:15.678 17:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:15.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.936 Nvme0n1 : 2.00 10229.50 39.96 0.00 0.00 0.00 0.00 0.00 00:07:15.936 =================================================================================================================== 00:07:15.936 Total : 10229.50 39.96 0.00 0.00 0.00 0.00 0.00 00:07:15.936 00:07:16.195 true 00:07:16.195 17:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:16.195 17:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:16.454 17:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:16.454 17:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:16.454 17:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 68617 00:07:16.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.711 Nvme0n1 : 3.00 10004.00 39.08 0.00 0.00 0.00 0.00 0.00 00:07:16.711 =================================================================================================================== 00:07:16.711 Total : 10004.00 39.08 0.00 0.00 0.00 0.00 0.00 00:07:16.711 00:07:18.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.088 Nvme0n1 : 4.00 9837.75 38.43 0.00 0.00 0.00 0.00 0.00 00:07:18.088 =================================================================================================================== 00:07:18.088 Total : 9837.75 38.43 0.00 0.00 0.00 0.00 0.00 00:07:18.088 00:07:19.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.022 Nvme0n1 : 5.00 9711.60 37.94 0.00 0.00 0.00 0.00 0.00 00:07:19.022 =================================================================================================================== 00:07:19.022 Total : 9711.60 37.94 0.00 0.00 0.00 0.00 0.00 00:07:19.022 00:07:19.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.963 Nvme0n1 : 6.00 9647.00 37.68 0.00 0.00 0.00 0.00 0.00 00:07:19.963 =================================================================================================================== 00:07:19.963 Total : 9647.00 37.68 0.00 0.00 0.00 0.00 0.00 00:07:19.963 00:07:20.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.900 Nvme0n1 : 7.00 9663.43 37.75 0.00 0.00 0.00 0.00 0.00 00:07:20.900 =================================================================================================================== 
00:07:20.900 Total : 9663.43 37.75 0.00 0.00 0.00 0.00 0.00 00:07:20.900 00:07:21.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.838 Nvme0n1 : 8.00 9472.00 37.00 0.00 0.00 0.00 0.00 0.00 00:07:21.838 =================================================================================================================== 00:07:21.838 Total : 9472.00 37.00 0.00 0.00 0.00 0.00 0.00 00:07:21.838 00:07:22.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.770 Nvme0n1 : 9.00 9377.00 36.63 0.00 0.00 0.00 0.00 0.00 00:07:22.770 =================================================================================================================== 00:07:22.771 Total : 9377.00 36.63 0.00 0.00 0.00 0.00 0.00 00:07:22.771 00:07:23.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.710 Nvme0n1 : 10.00 9204.30 35.95 0.00 0.00 0.00 0.00 0.00 00:07:23.710 =================================================================================================================== 00:07:23.710 Total : 9204.30 35.95 0.00 0.00 0.00 0.00 0.00 00:07:23.710 00:07:23.972 00:07:23.972 Latency(us) 00:07:23.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.972 Nvme0n1 : 10.01 9204.48 35.95 0.00 0.00 13897.05 2418.59 112347.43 00:07:23.972 =================================================================================================================== 00:07:23.972 Total : 9204.48 35.95 0.00 0.00 13897.05 2418.59 112347.43 00:07:23.972 0 00:07:23.972 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68571 00:07:23.972 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 68571 ']' 00:07:23.972 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 68571 00:07:23.972 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:23.972 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.972 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68571 00:07:23.972 killing process with pid 68571 00:07:23.972 Received shutdown signal, test time was about 10.000000 seconds 00:07:23.972 00:07:23.972 Latency(us) 00:07:23.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.972 =================================================================================================================== 00:07:23.972 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:23.972 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:23.972 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:23.972 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68571' 00:07:23.972 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 68571 00:07:23.972 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # 
wait 68571 00:07:24.230 17:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.490 17:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:24.748 17:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:24.748 17:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:25.004 17:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:25.005 17:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:25.005 17:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 67961 00:07:25.005 17:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 67961 00:07:25.261 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 67961 Killed "${NVMF_APP[@]}" "$@" 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=68784 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 68784 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 68784 ']' 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
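What makes this the "dirty" variant is visible right here: the lvstore was grown while the target was live, and the target is then killed with SIGKILL rather than shut down cleanly, so the lvstore is never closed. Roughly, with $RPC and $AIO as in the sketches above and $nvmfpid standing in for the old target PID (67961 in this run):

kill -9 "$nvmfpid"                                        # no clean shutdown: lvstore metadata left dirty
wait "$nvmfpid"                                           # reap the killed target (exits non-zero, as logged above)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &           # fresh target, PID 68784 in this run

# Re-creating the AIO bdev re-loads the lvstore; blobstore runs recovery
# (the "Performing recovery on blobstore" notices below) and the grown geometry
# survives: 99 total data clusters, 61 free, with the 150 MiB lvol intact.
$RPC bdev_aio_create "$AIO" aio_bdev 4096
$RPC bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 \
    | jq -r '.[0].total_data_clusters, .[0].free_clusters'                             # 99, then 61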
00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.261 17:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.261 [2024-07-24 17:54:32.054674] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:07:25.261 [2024-07-24 17:54:32.054754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.261 [2024-07-24 17:54:32.190227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.517 [2024-07-24 17:54:32.307842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.517 [2024-07-24 17:54:32.307902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.517 [2024-07-24 17:54:32.307918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.517 [2024-07-24 17:54:32.307932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.517 [2024-07-24 17:54:32.307946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.517 [2024-07-24 17:54:32.307985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.144 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.144 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:26.144 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:26.144 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.144 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.144 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.144 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.401 [2024-07-24 17:54:33.301059] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:26.401 [2024-07-24 17:54:33.301296] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:26.401 [2024-07-24 17:54:33.301479] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:26.401 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:26.401 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e35405cc-4bd1-46ab-88ef-1c824800a5a2 00:07:26.401 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e35405cc-4bd1-46ab-88ef-1c824800a5a2 00:07:26.401 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:26.401 17:54:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:26.401 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:26.401 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:26.401 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:26.659 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e35405cc-4bd1-46ab-88ef-1c824800a5a2 -t 2000 00:07:26.916 [ 00:07:26.916 { 00:07:26.916 "aliases": [ 00:07:26.916 "lvs/lvol" 00:07:26.916 ], 00:07:26.916 "assigned_rate_limits": { 00:07:26.916 "r_mbytes_per_sec": 0, 00:07:26.916 "rw_ios_per_sec": 0, 00:07:26.916 "rw_mbytes_per_sec": 0, 00:07:26.916 "w_mbytes_per_sec": 0 00:07:26.916 }, 00:07:26.916 "block_size": 4096, 00:07:26.916 "claimed": false, 00:07:26.916 "driver_specific": { 00:07:26.916 "lvol": { 00:07:26.916 "base_bdev": "aio_bdev", 00:07:26.916 "clone": false, 00:07:26.916 "esnap_clone": false, 00:07:26.916 "lvol_store_uuid": "9849a321-5f49-495a-9baf-ac3ff038cd21", 00:07:26.916 "num_allocated_clusters": 38, 00:07:26.916 "snapshot": false, 00:07:26.916 "thin_provision": false 00:07:26.916 } 00:07:26.916 }, 00:07:26.916 "name": "e35405cc-4bd1-46ab-88ef-1c824800a5a2", 00:07:26.916 "num_blocks": 38912, 00:07:26.916 "product_name": "Logical Volume", 00:07:26.916 "supported_io_types": { 00:07:26.916 "abort": false, 00:07:26.916 "compare": false, 00:07:26.916 "compare_and_write": false, 00:07:26.916 "copy": false, 00:07:26.916 "flush": false, 00:07:26.916 "get_zone_info": false, 00:07:26.916 "nvme_admin": false, 00:07:26.916 "nvme_io": false, 00:07:26.916 "nvme_io_md": false, 00:07:26.916 "nvme_iov_md": false, 00:07:26.916 "read": true, 00:07:26.916 "reset": true, 00:07:26.916 "seek_data": true, 00:07:26.916 "seek_hole": true, 00:07:26.916 "unmap": true, 00:07:26.916 "write": true, 00:07:26.916 "write_zeroes": true, 00:07:26.916 "zcopy": false, 00:07:26.916 "zone_append": false, 00:07:26.916 "zone_management": false 00:07:26.916 }, 00:07:26.916 "uuid": "e35405cc-4bd1-46ab-88ef-1c824800a5a2", 00:07:26.916 "zoned": false 00:07:26.916 } 00:07:26.916 ] 00:07:26.916 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:26.916 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:26.916 17:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:27.174 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:27.174 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:27.174 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:27.432 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( 
data_clusters == 99 )) 00:07:27.432 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:27.690 [2024-07-24 17:54:34.538276] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:27.690 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:27.949 2024/07/24 17:54:34 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:9849a321-5f49-495a-9baf-ac3ff038cd21], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:07:27.949 request: 00:07:27.949 { 00:07:27.949 "method": "bdev_lvol_get_lvstores", 00:07:27.949 "params": { 00:07:27.949 "uuid": "9849a321-5f49-495a-9baf-ac3ff038cd21" 00:07:27.949 } 00:07:27.949 } 00:07:27.949 Got JSON-RPC error response 00:07:27.949 GoRPCClient: error on JSON-RPC call 00:07:27.949 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:27.949 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.949 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.949 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.949 17:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:28.207 aio_bdev 00:07:28.207 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e35405cc-4bd1-46ab-88ef-1c824800a5a2 00:07:28.207 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e35405cc-4bd1-46ab-88ef-1c824800a5a2 00:07:28.207 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:28.207 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:28.207 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:28.207 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:28.207 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:28.465 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e35405cc-4bd1-46ab-88ef-1c824800a5a2 -t 2000 00:07:28.759 [ 00:07:28.759 { 00:07:28.759 "aliases": [ 00:07:28.759 "lvs/lvol" 00:07:28.759 ], 00:07:28.759 "assigned_rate_limits": { 00:07:28.759 "r_mbytes_per_sec": 0, 00:07:28.759 "rw_ios_per_sec": 0, 00:07:28.759 "rw_mbytes_per_sec": 0, 00:07:28.759 "w_mbytes_per_sec": 0 00:07:28.759 }, 00:07:28.759 "block_size": 4096, 00:07:28.759 "claimed": false, 00:07:28.759 "driver_specific": { 00:07:28.759 "lvol": { 00:07:28.759 "base_bdev": "aio_bdev", 00:07:28.759 "clone": false, 00:07:28.759 "esnap_clone": false, 00:07:28.759 "lvol_store_uuid": "9849a321-5f49-495a-9baf-ac3ff038cd21", 00:07:28.759 "num_allocated_clusters": 38, 00:07:28.759 "snapshot": false, 00:07:28.759 "thin_provision": false 00:07:28.759 } 00:07:28.759 }, 00:07:28.759 "name": "e35405cc-4bd1-46ab-88ef-1c824800a5a2", 00:07:28.759 "num_blocks": 38912, 00:07:28.759 "product_name": "Logical Volume", 00:07:28.759 "supported_io_types": { 00:07:28.759 "abort": false, 00:07:28.759 "compare": false, 00:07:28.759 "compare_and_write": false, 00:07:28.759 "copy": false, 00:07:28.759 "flush": false, 00:07:28.759 "get_zone_info": false, 00:07:28.759 "nvme_admin": false, 00:07:28.759 "nvme_io": false, 00:07:28.759 "nvme_io_md": false, 00:07:28.759 "nvme_iov_md": false, 00:07:28.759 "read": true, 00:07:28.759 "reset": true, 00:07:28.759 "seek_data": true, 00:07:28.759 "seek_hole": true, 00:07:28.759 "unmap": true, 00:07:28.759 "write": true, 00:07:28.759 "write_zeroes": true, 00:07:28.759 "zcopy": false, 00:07:28.759 "zone_append": false, 00:07:28.759 "zone_management": false 00:07:28.759 }, 00:07:28.759 "uuid": "e35405cc-4bd1-46ab-88ef-1c824800a5a2", 00:07:28.759 "zoned": false 00:07:28.759 } 00:07:28.759 ] 00:07:28.759 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:28.759 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:28.759 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:29.018 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:29.018 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:29.018 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:29.018 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:29.018 17:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e35405cc-4bd1-46ab-88ef-1c824800a5a2 00:07:29.276 17:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9849a321-5f49-495a-9baf-ac3ff038cd21 00:07:29.533 17:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:29.789 17:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:30.354 ************************************ 00:07:30.354 END TEST lvs_grow_dirty 00:07:30.354 ************************************ 00:07:30.354 00:07:30.354 real 0m20.395s 00:07:30.354 user 0m41.523s 00:07:30.354 sys 0m7.949s 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:30.354 nvmf_trace.0 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:30.354 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 
-- # sync 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:30.611 rmmod nvme_tcp 00:07:30.611 rmmod nvme_fabrics 00:07:30.611 rmmod nvme_keyring 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 68784 ']' 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 68784 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 68784 ']' 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 68784 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68784 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.611 killing process with pid 68784 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68784' 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 68784 00:07:30.611 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 68784 00:07:30.869 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.869 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.869 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.869 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.869 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.869 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.869 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:30.870 ************************************ 00:07:30.870 END TEST nvmf_lvs_grow 00:07:30.870 ************************************ 00:07:30.870 00:07:30.870 real 0m41.246s 00:07:30.870 user 1m4.712s 
00:07:30.870 sys 0m11.551s 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.870 ************************************ 00:07:30.870 START TEST nvmf_bdev_io_wait 00:07:30.870 ************************************ 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:30.870 * Looking for test storage... 00:07:30.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.870 17:54:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.870 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:30.871 17:54:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:30.871 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:31.181 Cannot find device "nvmf_tgt_br" 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:31.181 Cannot find device "nvmf_tgt_br2" 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:31.181 Cannot find device "nvmf_tgt_br" 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:31.181 Cannot find device "nvmf_tgt_br2" 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:31.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:31.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:31.181 17:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:31.181 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:31.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:07:31.466 00:07:31.466 --- 10.0.0.2 ping statistics --- 00:07:31.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.466 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:31.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:31.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:07:31.466 00:07:31.466 --- 10.0.0.3 ping statistics --- 00:07:31.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.466 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:31.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:07:31.466 00:07:31.466 --- 10.0.0.1 ping statistics --- 00:07:31.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.466 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=69195 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 69195 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 69195 ']' 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
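Condensed from the ip/iptables commands traced above, the network that nvmf_veth_init leaves behind for this test is roughly the following. This is a sketch, not the literal helper: the grouping and comments are added here, but the names and addresses are exactly the ones in the trace — one initiator veth on the host at 10.0.0.1, two target veths at 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all tied together by the nvmf_br bridge.

    # Target-side namespace and three veth pairs; the *_if ends carry addresses, the *_br ends join the bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring every end up (the loop is shorthand for the individual commands in the trace).
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP traffic (port 4420) in, allow forwarding across the bridge, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1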
00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.466 17:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:31.466 [2024-07-24 17:54:38.232573] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:07:31.466 [2024-07-24 17:54:38.232680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.466 [2024-07-24 17:54:38.377519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.725 [2024-07-24 17:54:38.485241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.725 [2024-07-24 17:54:38.485299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.725 [2024-07-24 17:54:38.485310] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.725 [2024-07-24 17:54:38.485319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.725 [2024-07-24 17:54:38.485327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.725 [2024-07-24 17:54:38.485481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.725 [2024-07-24 17:54:38.486121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.725 [2024-07-24 17:54:38.486314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.725 [2024-07-24 17:54:38.486323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:32.292 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.292 17:54:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:32.551 [2024-07-24 17:54:39.299700] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:32.551 Malloc0 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:32.551 [2024-07-24 17:54:39.355477] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=69253 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=69256 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@32 -- # FLUSH_PID=69258 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=69259 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:32.551 { 00:07:32.551 "params": { 00:07:32.551 "name": "Nvme$subsystem", 00:07:32.551 "trtype": "$TEST_TRANSPORT", 00:07:32.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:32.551 "adrfam": "ipv4", 00:07:32.551 "trsvcid": "$NVMF_PORT", 00:07:32.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:32.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:32.551 "hdgst": ${hdgst:-false}, 00:07:32.551 "ddgst": ${ddgst:-false} 00:07:32.551 }, 00:07:32.551 "method": "bdev_nvme_attach_controller" 00:07:32.551 } 00:07:32.551 EOF 00:07:32.551 )") 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:32.551 { 00:07:32.551 "params": { 00:07:32.551 "name": "Nvme$subsystem", 00:07:32.551 "trtype": "$TEST_TRANSPORT", 00:07:32.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:32.551 "adrfam": "ipv4", 00:07:32.551 "trsvcid": "$NVMF_PORT", 00:07:32.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:32.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:32.551 "hdgst": ${hdgst:-false}, 00:07:32.551 "ddgst": ${ddgst:-false} 00:07:32.551 }, 00:07:32.551 "method": "bdev_nvme_attach_controller" 00:07:32.551 } 00:07:32.551 EOF 00:07:32.551 )") 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:32.551 17:54:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:32.551 { 00:07:32.551 "params": { 00:07:32.551 "name": "Nvme$subsystem", 00:07:32.551 "trtype": "$TEST_TRANSPORT", 00:07:32.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:32.551 "adrfam": "ipv4", 00:07:32.551 "trsvcid": "$NVMF_PORT", 00:07:32.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:32.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:32.551 "hdgst": ${hdgst:-false}, 00:07:32.551 "ddgst": ${ddgst:-false} 00:07:32.551 }, 00:07:32.551 "method": "bdev_nvme_attach_controller" 00:07:32.551 } 00:07:32.551 EOF 00:07:32.551 )") 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:32.551 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:32.552 "params": { 00:07:32.552 "name": "Nvme1", 00:07:32.552 "trtype": "tcp", 00:07:32.552 "traddr": "10.0.0.2", 00:07:32.552 "adrfam": "ipv4", 00:07:32.552 "trsvcid": "4420", 00:07:32.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:32.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:32.552 "hdgst": false, 00:07:32.552 "ddgst": false 00:07:32.552 }, 00:07:32.552 "method": "bdev_nvme_attach_controller" 00:07:32.552 }' 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:32.552 { 00:07:32.552 "params": { 00:07:32.552 "name": "Nvme$subsystem", 00:07:32.552 "trtype": "$TEST_TRANSPORT", 00:07:32.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:32.552 "adrfam": "ipv4", 00:07:32.552 "trsvcid": "$NVMF_PORT", 00:07:32.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:32.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:32.552 "hdgst": ${hdgst:-false}, 00:07:32.552 "ddgst": ${ddgst:-false} 00:07:32.552 }, 00:07:32.552 "method": "bdev_nvme_attach_controller" 00:07:32.552 } 00:07:32.552 EOF 00:07:32.552 )") 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:32.552 "params": { 00:07:32.552 "name": "Nvme1", 00:07:32.552 "trtype": "tcp", 00:07:32.552 "traddr": "10.0.0.2", 00:07:32.552 "adrfam": "ipv4", 00:07:32.552 "trsvcid": "4420", 00:07:32.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:32.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 
00:07:32.552 "hdgst": false, 00:07:32.552 "ddgst": false 00:07:32.552 }, 00:07:32.552 "method": "bdev_nvme_attach_controller" 00:07:32.552 }' 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:32.552 "params": { 00:07:32.552 "name": "Nvme1", 00:07:32.552 "trtype": "tcp", 00:07:32.552 "traddr": "10.0.0.2", 00:07:32.552 "adrfam": "ipv4", 00:07:32.552 "trsvcid": "4420", 00:07:32.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:32.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:32.552 "hdgst": false, 00:07:32.552 "ddgst": false 00:07:32.552 }, 00:07:32.552 "method": "bdev_nvme_attach_controller" 00:07:32.552 }' 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:32.552 "params": { 00:07:32.552 "name": "Nvme1", 00:07:32.552 "trtype": "tcp", 00:07:32.552 "traddr": "10.0.0.2", 00:07:32.552 "adrfam": "ipv4", 00:07:32.552 "trsvcid": "4420", 00:07:32.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:32.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:32.552 "hdgst": false, 00:07:32.552 "ddgst": false 00:07:32.552 }, 00:07:32.552 "method": "bdev_nvme_attach_controller" 00:07:32.552 }' 00:07:32.552 [2024-07-24 17:54:39.428886] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:07:32.552 [2024-07-24 17:54:39.428979] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:32.552 [2024-07-24 17:54:39.432155] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:07:32.552 [2024-07-24 17:54:39.432273] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:32.552 [2024-07-24 17:54:39.438347] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:07:32.552 [2024-07-24 17:54:39.438428] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:32.552 17:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 69253 00:07:32.552 [2024-07-24 17:54:39.442407] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:07:32.552 [2024-07-24 17:54:39.442863] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:32.810 [2024-07-24 17:54:39.630366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.810 [2024-07-24 17:54:39.693339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.810 [2024-07-24 17:54:39.713572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:07:32.810 [2024-07-24 17:54:39.750134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.810 [2024-07-24 17:54:39.775752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:07:33.069 [2024-07-24 17:54:39.811969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.069 [2024-07-24 17:54:39.835008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:33.069 Running I/O for 1 seconds... 00:07:33.069 [2024-07-24 17:54:39.896047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:07:33.069 Running I/O for 1 seconds... 00:07:33.069 Running I/O for 1 seconds... 00:07:33.069 Running I/O for 1 seconds... 00:07:34.032 00:07:34.032 Latency(us) 00:07:34.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.032 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:34.032 Nvme1n1 : 1.00 203055.83 793.19 0.00 0.00 628.02 267.22 912.82 00:07:34.032 =================================================================================================================== 00:07:34.032 Total : 203055.83 793.19 0.00 0.00 628.02 267.22 912.82 00:07:34.032 00:07:34.032 Latency(us) 00:07:34.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.032 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:34.032 Nvme1n1 : 1.01 12061.49 47.12 0.00 0.00 10577.25 5835.82 18350.08 00:07:34.032 =================================================================================================================== 00:07:34.032 Total : 12061.49 47.12 0.00 0.00 10577.25 5835.82 18350.08 00:07:34.032 00:07:34.032 Latency(us) 00:07:34.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.032 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:34.032 Nvme1n1 : 1.01 8690.49 33.95 0.00 0.00 14662.62 7957.94 24591.60 00:07:34.032 =================================================================================================================== 00:07:34.032 Total : 8690.49 33.95 0.00 0.00 14662.62 7957.94 24591.60 00:07:34.291 00:07:34.291 Latency(us) 00:07:34.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.291 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:34.291 Nvme1n1 : 1.01 8081.51 31.57 0.00 0.00 15771.83 7989.15 25590.25 00:07:34.291 =================================================================================================================== 00:07:34.291 Total : 8081.51 31.57 0.00 0.00 15771.83 7989.15 25590.25 00:07:34.291 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 69256 00:07:34.550 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 69258 00:07:34.550 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@40 -- # wait 69259 00:07:34.550 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.550 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.550 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:34.551 rmmod nvme_tcp 00:07:34.551 rmmod nvme_fabrics 00:07:34.551 rmmod nvme_keyring 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 69195 ']' 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 69195 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 69195 ']' 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 69195 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69195 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69195' 00:07:34.551 killing process with pid 69195 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 69195 00:07:34.551 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 69195 00:07:34.809 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:34.809 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ 
tcp == \t\c\p ]] 00:07:34.809 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:34.809 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:34.809 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:34.809 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.809 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.809 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.809 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:34.809 00:07:34.809 real 0m3.927s 00:07:34.809 user 0m17.201s 00:07:34.809 sys 0m2.072s 00:07:34.809 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:34.810 ************************************ 00:07:34.810 END TEST nvmf_bdev_io_wait 00:07:34.810 ************************************ 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.810 ************************************ 00:07:34.810 START TEST nvmf_queue_depth 00:07:34.810 ************************************ 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:34.810 * Looking for test storage... 
00:07:34.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.810 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.069 17:54:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:35.069 17:54:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:35.069 Cannot find device "nvmf_tgt_br" 00:07:35.069 17:54:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:35.069 Cannot find device "nvmf_tgt_br2" 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:35.069 Cannot find device "nvmf_tgt_br" 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:35.069 Cannot find device "nvmf_tgt_br2" 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:35.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:35.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:35.069 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:35.070 17:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:35.070 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:35.070 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:35.070 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:35.070 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:35.070 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:35.070 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 
-- # ip link set nvmf_tgt_br up 00:07:35.070 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:35.070 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:35.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:07:35.329 00:07:35.329 --- 10.0.0.2 ping statistics --- 00:07:35.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.329 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:35.329 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:35.329 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:07:35.329 00:07:35.329 --- 10.0.0.3 ping statistics --- 00:07:35.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.329 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:35.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:35.329 00:07:35.329 --- 10.0.0.1 ping statistics --- 00:07:35.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.329 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=69488 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 69488 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 69488 ']' 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.329 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.329 [2024-07-24 17:54:42.221861] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:07:35.329 [2024-07-24 17:54:42.221972] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.590 [2024-07-24 17:54:42.360869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.590 [2024-07-24 17:54:42.465341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.590 [2024-07-24 17:54:42.465392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.590 [2024-07-24 17:54:42.465403] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.590 [2024-07-24 17:54:42.465412] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.590 [2024-07-24 17:54:42.465419] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.590 [2024-07-24 17:54:42.465448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.538 [2024-07-24 17:54:43.190143] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.538 Malloc0 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
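At this point the queue-depth test has a fresh nvmf_tgt running inside the namespace (core mask 0x2, pid 69488) and has created the TCP transport plus a 64 MiB, 512 B-block Malloc0 bdev over its RPC socket. rpc_cmd in the trace is a thin autotest wrapper that forwards to scripts/rpc.py; spelled out as direct invocations from the repo root, the same steps look roughly like this (a condensed sketch of what the trace already shows, not a replacement for the helpers):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # ... wait for /var/tmp/spdk.sock to accept RPCs (waitforlisten in the trace), then:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # flags exactly as NVMF_TRANSPORT_OPTS passes them
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512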
00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.538 [2024-07-24 17:54:43.263647] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=69538 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 69538 /var/tmp/bdevperf.sock 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 69538 ']' 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.538 17:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.538 [2024-07-24 17:54:43.326321] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
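With the namespace and the 10.0.0.2:4420 listener attached (queue_depth.sh lines 26-27 above), the test starts bdevperf in wait-for-RPC mode and drives it at a queue depth of 1024. A sketch of that flow with the exact arguments from the log; the controller attach and the perform_tests call that appear a little further down complete it:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # bdevperf: stay idle until told to run (-z), own RPC socket, qd=1024, 4 KiB verify I/O for 10 s.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # Attach the remote namespace over TCP as NVMe0, then trigger the timed run.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests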
00:07:36.538 [2024-07-24 17:54:43.326408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69538 ] 00:07:36.538 [2024-07-24 17:54:43.473362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.804 [2024-07-24 17:54:43.576709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.368 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.368 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:37.368 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:37.368 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.368 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.672 NVMe0n1 00:07:37.672 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.672 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:37.672 Running I/O for 10 seconds... 00:07:47.650 00:07:47.650 Latency(us) 00:07:47.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.650 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:47.650 Verification LBA range: start 0x0 length 0x4000 00:07:47.650 NVMe0n1 : 10.06 10018.55 39.13 0.00 0.00 101795.89 21096.35 108852.18 00:07:47.650 =================================================================================================================== 00:07:47.650 Total : 10018.55 39.13 0.00 0.00 101795.89 21096.35 108852.18 00:07:47.650 0 00:07:47.650 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 69538 00:07:47.650 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 69538 ']' 00:07:47.650 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 69538 00:07:47.650 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:47.650 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.650 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69538 00:07:47.650 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.650 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.650 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69538' 00:07:47.650 killing process with pid 69538 00:07:47.650 Received shutdown signal, test time was about 10.000000 seconds 00:07:47.650 00:07:47.650 Latency(us) 00:07:47.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.650 
=================================================================================================================== 00:07:47.650 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:47.650 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 69538 00:07:47.650 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 69538 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:47.909 rmmod nvme_tcp 00:07:47.909 rmmod nvme_fabrics 00:07:47.909 rmmod nvme_keyring 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 69488 ']' 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 69488 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 69488 ']' 00:07:47.909 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 69488 00:07:47.910 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:47.910 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.167 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69488 00:07:48.167 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:48.167 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:48.167 killing process with pid 69488 00:07:48.167 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69488' 00:07:48.167 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 69488 00:07:48.167 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 69488 00:07:48.167 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.167 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.167 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.167 17:54:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.167 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.167 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.167 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.167 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.426 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:48.426 00:07:48.426 real 0m13.463s 00:07:48.426 user 0m23.063s 00:07:48.426 sys 0m2.266s 00:07:48.426 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.426 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.426 ************************************ 00:07:48.426 END TEST nvmf_queue_depth 00:07:48.426 ************************************ 00:07:48.426 17:54:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:48.426 17:54:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:48.426 17:54:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.426 17:54:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:48.426 ************************************ 00:07:48.426 START TEST nvmf_target_multipath 00:07:48.426 ************************************ 00:07:48.426 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:48.426 * Looking for test storage... 
00:07:48.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:48.427 17:54:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:48.427 Cannot find device "nvmf_tgt_br" 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:48.427 Cannot find device "nvmf_tgt_br2" 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:48.427 Cannot find device "nvmf_tgt_br" 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:07:48.427 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:48.686 Cannot find device "nvmf_tgt_br2" 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:48.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:48.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:48.686 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:48.945 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:48.945 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:48.945 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:48.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:07:48.945 00:07:48.945 --- 10.0.0.2 ping statistics --- 00:07:48.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.945 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:07:48.945 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:48.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:48.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:07:48.945 00:07:48.945 --- 10.0.0.3 ping statistics --- 00:07:48.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.945 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:07:48.945 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:48.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:48.945 00:07:48.945 --- 10.0.0.1 ping statistics --- 00:07:48.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.945 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:48.945 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=69869 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 69869 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 69869 ']' 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.946 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:48.946 [2024-07-24 17:54:55.759617] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:07:48.946 [2024-07-24 17:54:55.760128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.946 [2024-07-24 17:54:55.894955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.205 [2024-07-24 17:54:56.000587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.205 [2024-07-24 17:54:56.000643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.205 [2024-07-24 17:54:56.000653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.205 [2024-07-24 17:54:56.000661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.205 [2024-07-24 17:54:56.000684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.205 [2024-07-24 17:54:56.000967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.205 [2024-07-24 17:54:56.001339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.205 [2024-07-24 17:54:56.001402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.205 [2024-07-24 17:54:56.001413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.772 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.772 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:07:49.772 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.772 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:49.772 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:50.031 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.031 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:50.031 [2024-07-24 17:54:56.952184] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.031 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:07:50.598 Malloc0 00:07:50.598 17:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
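The multipath test has just created cnode1 with ANA reporting enabled (the -r flag on the nvmf_create_subsystem call above); it then attaches Malloc0, publishes the subsystem on both target addresses, and connects to each address from the initiator so the kernel assembles a single multipathed controller (the nvme0c0n1/nvme0c1n1 paths checked below). A sketch of that sequence with the NQN, host NQN/ID, addresses and ports from this run; the -g/-G connect options are carried over from the logged commands as-is:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee
    HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee

    $RPC nvmf_subsystem_add_ns $NQN Malloc0
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4420

    # One connect per listener; both land on the same subsystem, giving two ANA-managed paths.
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $NQN -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $NQN -a 10.0.0.3 -s 4420 -g -G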
00:07:50.598 17:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.940 17:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.201 [2024-07-24 17:54:58.021133] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.201 17:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:51.462 [2024-07-24 17:54:58.237365] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:51.462 17:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:07:51.721 17:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:07:51.721 17:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:07:51.721 17:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:07:51.721 17:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:51.721 17:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:51.721 17:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=70010 00:07:54.253 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:07:54.253 [global] 00:07:54.253 thread=1 00:07:54.253 invalidate=1 00:07:54.253 rw=randrw 00:07:54.253 time_based=1 00:07:54.253 runtime=6 00:07:54.253 ioengine=libaio 00:07:54.253 direct=1 00:07:54.253 bs=4096 00:07:54.253 iodepth=128 00:07:54.253 norandommap=0 00:07:54.253 numjobs=1 00:07:54.253 00:07:54.253 verify_dump=1 00:07:54.253 verify_backlog=512 00:07:54.253 verify_state_save=0 00:07:54.253 do_verify=1 00:07:54.253 verify=crc32c-intel 00:07:54.253 [job0] 00:07:54.253 filename=/dev/nvme0n1 00:07:54.253 Could not set queue depth (nvme0n1) 00:07:54.253 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:54.253 fio-3.35 00:07:54.253 Starting 1 thread 00:07:54.820 17:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:07:55.078 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:07:55.643 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:07:55.643 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:07:55.644 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:55.644 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:55.644 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:55.644 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:55.644 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:07:55.644 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:07:55.644 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:55.644 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:55.644 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:55.644 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:55.644 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:07:56.576 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:07:56.576 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:56.576 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:56.576 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:07:56.834 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:57.091 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:07:58.026 17:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:07:58.026 17:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:58.026 17:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:58.026 17:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 70010 00:08:00.640 00:08:00.640 job0: (groupid=0, jobs=1): err= 0: pid=70037: Wed Jul 24 17:55:07 2024 00:08:00.640 read: IOPS=11.4k, BW=44.4MiB/s (46.5MB/s)(266MiB/6004msec) 00:08:00.640 slat (usec): min=4, max=7471, avg=49.69, stdev=220.36 00:08:00.640 clat (usec): min=692, max=15699, avg=7611.70, stdev=1332.73 00:08:00.640 lat (usec): min=738, max=15711, avg=7661.40, stdev=1342.90 00:08:00.640 clat percentiles (usec): 00:08:00.640 | 1.00th=[ 4490], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 6783], 00:08:00.640 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7635], 00:08:00.640 | 70.00th=[ 7963], 80.00th=[ 8356], 90.00th=[ 9241], 95.00th=[10290], 00:08:00.640 | 99.00th=[11731], 99.50th=[12518], 99.90th=[14484], 99.95th=[14746], 00:08:00.640 | 99.99th=[15401] 00:08:00.640 bw ( KiB/s): min=10696, max=31936, per=53.09%, avg=24128.73, stdev=6487.54, samples=11 00:08:00.640 iops : min= 2674, max= 7984, avg=6032.18, stdev=1621.88, samples=11 00:08:00.640 write: IOPS=6900, BW=27.0MiB/s (28.3MB/s)(145MiB/5394msec); 0 zone resets 00:08:00.640 slat (usec): min=12, max=4047, avg=59.84, stdev=145.78 00:08:00.640 clat (usec): min=664, max=15294, avg=6507.22, stdev=1150.07 00:08:00.640 lat (usec): min=723, max=15321, avg=6567.06, stdev=1155.51 00:08:00.640 clat percentiles (usec): 00:08:00.640 | 1.00th=[ 3458], 5.00th=[ 4621], 10.00th=[ 5407], 20.00th=[ 5800], 00:08:00.640 | 30.00th=[ 6063], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6652], 00:08:00.640 | 70.00th=[ 6849], 80.00th=[ 7111], 90.00th=[ 7701], 95.00th=[ 8586], 00:08:00.640 | 99.00th=[10028], 99.50th=[11076], 99.90th=[12649], 99.95th=[12911], 00:08:00.640 | 99.99th=[14877] 00:08:00.640 bw ( KiB/s): min=11376, max=31296, per=87.52%, avg=24160.00, stdev=6283.80, samples=11 00:08:00.640 iops : min= 2844, max= 7824, avg=6040.00, stdev=1570.95, samples=11 00:08:00.640 lat (usec) : 750=0.01%, 1000=0.01% 00:08:00.640 lat (msec) : 2=0.04%, 4=0.98%, 10=94.83%, 20=4.14% 00:08:00.640 cpu : usr=5.45%, sys=24.12%, ctx=6896, majf=0, minf=121 00:08:00.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:00.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:00.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:00.640 issued rwts: total=68220,37223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:00.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:00.640 00:08:00.640 Run status group 0 (all jobs): 00:08:00.640 READ: bw=44.4MiB/s (46.5MB/s), 44.4MiB/s-44.4MiB/s (46.5MB/s-46.5MB/s), io=266MiB (279MB), run=6004-6004msec 00:08:00.640 WRITE: bw=27.0MiB/s (28.3MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=145MiB (152MB), run=5394-5394msec 00:08:00.640 00:08:00.640 Disk stats (read/write): 00:08:00.640 nvme0n1: ios=67493/36230, merge=0/0, ticks=479787/219274, in_queue=699061, util=98.58% 00:08:00.640 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:00.640 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:08:00.899 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:01.834 17:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:01.834 17:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:01.834 17:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:01.834 17:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:01.834 17:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=70171 00:08:01.834 17:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:01.834 17:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:01.834 [global] 00:08:01.834 thread=1 00:08:01.834 invalidate=1 00:08:01.834 rw=randrw 00:08:01.834 time_based=1 00:08:01.834 runtime=6 00:08:01.834 ioengine=libaio 00:08:01.834 direct=1 00:08:01.834 bs=4096 00:08:01.834 iodepth=128 00:08:01.834 norandommap=0 00:08:01.834 numjobs=1 00:08:01.834 00:08:01.834 verify_dump=1 00:08:01.834 verify_backlog=512 00:08:01.834 verify_state_save=0 00:08:01.834 do_verify=1 00:08:01.834 verify=crc32c-intel 00:08:01.834 [job0] 00:08:01.834 filename=/dev/nvme0n1 00:08:01.834 Could not set queue depth (nvme0n1) 00:08:02.092 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:02.092 fio-3.35 00:08:02.092 Starting 1 thread 00:08:03.114 17:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:03.114 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:03.683 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:04.619 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:04.619 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:04.619 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:04.619 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:04.878 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:05.137 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:06.071 17:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:06.071 17:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:06.071 17:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:06.071 17:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 70171 00:08:08.600 00:08:08.600 job0: (groupid=0, jobs=1): err= 0: pid=70192: Wed Jul 24 17:55:15 2024 00:08:08.600 read: IOPS=12.9k, BW=50.5MiB/s (52.9MB/s)(303MiB/6005msec) 00:08:08.600 slat (usec): min=4, max=5274, avg=39.50, stdev=196.23 00:08:08.600 clat (usec): min=480, max=47823, avg=6908.81, stdev=1502.32 00:08:08.600 lat (usec): min=515, max=47838, avg=6948.30, stdev=1517.61 00:08:08.600 clat percentiles (usec): 00:08:08.600 | 1.00th=[ 3359], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5735], 00:08:08.600 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:08:08.600 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[ 9241], 00:08:08.600 | 99.00th=[11076], 99.50th=[11600], 99.90th=[14484], 99.95th=[15270], 00:08:08.600 | 99.99th=[16909] 00:08:08.600 bw ( KiB/s): min=15536, max=41400, per=52.80%, avg=27282.91, stdev=7807.32, samples=11 00:08:08.600 iops : min= 3884, max=10350, avg=6820.73, stdev=1951.83, samples=11 00:08:08.600 write: IOPS=7601, BW=29.7MiB/s (31.1MB/s)(154MiB/5185msec); 0 zone resets 00:08:08.600 slat (usec): min=11, max=3429, avg=51.12, stdev=124.68 00:08:08.600 clat (usec): min=454, max=15535, avg=5730.33, stdev=1451.77 00:08:08.600 lat (usec): min=486, max=15556, avg=5781.45, stdev=1462.34 00:08:08.600 clat percentiles (usec): 00:08:08.600 | 1.00th=[ 2507], 5.00th=[ 3326], 10.00th=[ 3752], 20.00th=[ 4293], 00:08:08.600 | 30.00th=[ 4948], 40.00th=[ 5669], 50.00th=[ 5997], 60.00th=[ 6325], 00:08:08.600 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7177], 95.00th=[ 7635], 00:08:08.600 | 99.00th=[ 9503], 99.50th=[10290], 99.90th=[11863], 99.95th=[12256], 00:08:08.600 | 99.99th=[13829] 00:08:08.600 bw ( KiB/s): min=16384, max=40216, per=89.60%, avg=27245.09, stdev=7489.30, samples=11 00:08:08.600 iops : min= 4096, max=10054, avg=6811.27, stdev=1872.32, samples=11 00:08:08.600 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:08:08.600 lat (msec) : 2=0.22%, 4=6.01%, 10=91.67%, 20=2.07%, 50=0.01% 00:08:08.600 cpu : usr=5.96%, sys=25.08%, ctx=7937, majf=0, minf=114 00:08:08.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:08.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:08.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:08.600 issued rwts: total=77570,39416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:08.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:08.600 00:08:08.600 Run status group 0 (all jobs): 00:08:08.600 READ: bw=50.5MiB/s (52.9MB/s), 50.5MiB/s-50.5MiB/s (52.9MB/s-52.9MB/s), io=303MiB (318MB), run=6005-6005msec 00:08:08.600 WRITE: bw=29.7MiB/s (31.1MB/s), 29.7MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=154MiB (161MB), run=5185-5185msec 00:08:08.600 00:08:08.600 Disk stats (read/write): 00:08:08.600 nvme0n1: ios=76920/38398, merge=0/0, ticks=492427/200425, in_queue=692852, util=98.60% 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:08.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.600 rmmod nvme_tcp 00:08:08.600 rmmod nvme_fabrics 00:08:08.600 rmmod nvme_keyring 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 69869 ']' 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 69869 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 69869 ']' 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 69869 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69869 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:08.600 killing process with pid 69869 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69869' 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 69869 00:08:08.600 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 69869 00:08:08.869 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.869 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.869 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.869 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.869 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.869 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.869 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.869 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.869 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:08.869 00:08:08.869 real 0m20.603s 00:08:08.869 user 1m20.004s 00:08:08.869 sys 0m7.743s 00:08:08.869 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.869 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:08.869 ************************************ 00:08:08.869 END TEST nvmf_target_multipath 00:08:08.869 ************************************ 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.128 ************************************ 00:08:09.128 START TEST nvmf_zcopy 00:08:09.128 ************************************ 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:09.128 * Looking for test storage... 
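The multipath trace above repeatedly calls check_ana_state() from target/multipath.sh: after each nvmf_subsystem_listener_set_ana_state RPC it polls /sys/block/<path>/ana_state until the kernel initiator reports the expected state, sleeping 1 second between attempts and giving up after a 20-iteration timeout. A minimal sketch of that polling pattern, reconstructed from the xtrace output above (not the verbatim script):

  check_ana_state() {
      local path=$1 ana_state=$2                  # e.g. nvme0c0n1 optimized
      local timeout=20
      local ana_state_f=/sys/block/$path/ana_state
      # Retry until the sysfs node exists and reports the requested ANA state.
      while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
          (( timeout-- == 0 )) && return 1
          sleep 1s
      done
  }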
00:08:09.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.128 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:09.129 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.129 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:09.129 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.129 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.129 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.129 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.129 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.129 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.129 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.129 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 
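Sourcing nvmf/common.sh above also fixes the initiator identity for the rest of the run: NVME_HOSTNQN comes from nvme gen-hostnqn, NVME_HOSTID is the UUID embedded in that NQN, and both are collected into the NVME_HOST flag array. This zcopy test drives I/O through bdevperf rather than the kernel initiator, but tests that use the kernel path (like the multipath run above) put those variables on the nvme connect command line. A hypothetical illustration of that usage, with standard nvme-cli flags only; it is not part of this trace:

  # Illustration only: kernel-initiator connect/disconnect using the
  # host identity exported by nvmf/common.sh.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1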
00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:09.129 Cannot find device "nvmf_tgt_br" 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:09.129 Cannot find device "nvmf_tgt_br2" 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:09.129 Cannot find device "nvmf_tgt_br" 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 
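nvmf_veth_init begins by tearing down whatever a previous run may have left behind, which is why the trace shows "Cannot find device ..." messages followed by "# true" steps here and just below: on a clean host every teardown command is expected to fail and the failure is ignored before the namespace and veth pairs are recreated. The same idea as a standalone sketch (an approximation of the pattern, not the verbatim common.sh code):

  # Best-effort cleanup of a previous topology; every step may legitimately fail.
  ip link set nvmf_tgt_br nomaster 2>/dev/null || true
  ip link set nvmf_tgt_br2 nomaster 2>/dev/null || true
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true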
00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:09.129 Cannot find device "nvmf_tgt_br2" 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:08:09.129 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:09.388 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.388 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:09.388 17:55:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.388 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:09.647 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.647 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:09.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:08:09.648 00:08:09.648 --- 10.0.0.2 ping statistics --- 00:08:09.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.648 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:09.648 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.648 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:08:09.648 00:08:09.648 --- 10.0.0.3 ping statistics --- 00:08:09.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.648 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:09.648 00:08:09.648 --- 10.0.0.1 ping statistics --- 00:08:09.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.648 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=70477 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@482 -- # waitforlisten 70477 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 70477 ']' 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.648 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.648 [2024-07-24 17:55:16.483727] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:08:09.648 [2024-07-24 17:55:16.483847] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.907 [2024-07-24 17:55:16.623417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.907 [2024-07-24 17:55:16.748405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.907 [2024-07-24 17:55:16.748462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.907 [2024-07-24 17:55:16.748477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.907 [2024-07-24 17:55:16.748490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.907 [2024-07-24 17:55:16.748500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
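nvmfappstart launches nvmf_tgt inside the namespace that was just built and then blocks in waitforlisten until the RPC socket answers; only after that do the provisioning RPCs below run. A sketch of that start-and-wait pattern, using the command line visible in the trace (the polling loop is an approximation of waitforlisten, not its exact code, and rpc_get_methods is used here simply as a readiness probe):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Poll the default RPC socket until the target answers (or the process dies).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1
      sleep 0.5
  done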
00:08:09.907 [2024-07-24 17:55:16.748540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.907 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.907 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:09.907 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.907 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.907 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:10.166 [2024-07-24 17:55:16.937659] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:10.166 [2024-07-24 17:55:16.953749] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:10.166 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:10.167 malloc0 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:10.167 { 00:08:10.167 "params": { 00:08:10.167 "name": "Nvme$subsystem", 00:08:10.167 "trtype": "$TEST_TRANSPORT", 00:08:10.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.167 "adrfam": "ipv4", 00:08:10.167 "trsvcid": "$NVMF_PORT", 00:08:10.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.167 "hdgst": ${hdgst:-false}, 00:08:10.167 "ddgst": ${ddgst:-false} 00:08:10.167 }, 00:08:10.167 "method": "bdev_nvme_attach_controller" 00:08:10.167 } 00:08:10.167 EOF 00:08:10.167 )") 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:10.167 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:08:10.167 17:55:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:10.167 17:55:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:10.167 "params": { 00:08:10.167 "name": "Nvme1", 00:08:10.167 "trtype": "tcp", 00:08:10.167 "traddr": "10.0.0.2", 00:08:10.167 "adrfam": "ipv4", 00:08:10.167 "trsvcid": "4420", 00:08:10.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.167 "hdgst": false, 00:08:10.167 "ddgst": false 00:08:10.167 }, 00:08:10.167 "method": "bdev_nvme_attach_controller" 00:08:10.167 }' 00:08:10.167 [2024-07-24 17:55:17.045199] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:08:10.167 [2024-07-24 17:55:17.045307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70509 ] 00:08:10.425 [2024-07-24 17:55:17.191720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.425 [2024-07-24 17:55:17.297584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.683 Running I/O for 10 seconds... 
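Once the target is up, zcopy.sh provisions it entirely through rpc_cmd (a thin wrapper around scripts/rpc.py in this tree): a TCP transport with zero-copy enabled, subsystem cnode1 with a data listener on 10.0.0.2:4420 plus a discovery listener, and a 32 MB malloc bdev attached as namespace 1. The same sequence from the trace, written directly against rpc.py:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy            # zero-copy TCP transport
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                   # 32 MB bdev, 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1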
00:08:20.698 00:08:20.698 Latency(us) 00:08:20.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.698 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:20.698 Verification LBA range: start 0x0 length 0x1000 00:08:20.698 Nvme1n1 : 10.01 7337.09 57.32 0.00 0.00 17393.93 1973.88 28211.69 00:08:20.698 =================================================================================================================== 00:08:20.698 Total : 7337.09 57.32 0.00 0.00 17393.93 1973.88 28211.69 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=70631 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:20.959 { 00:08:20.959 "params": { 00:08:20.959 "name": "Nvme$subsystem", 00:08:20.959 "trtype": "$TEST_TRANSPORT", 00:08:20.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.959 "adrfam": "ipv4", 00:08:20.959 "trsvcid": "$NVMF_PORT", 00:08:20.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.959 "hdgst": ${hdgst:-false}, 00:08:20.959 "ddgst": ${ddgst:-false} 00:08:20.959 }, 00:08:20.959 "method": "bdev_nvme_attach_controller" 00:08:20.959 } 00:08:20.959 EOF 00:08:20.959 )") 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:20.959 [2024-07-24 17:55:27.681946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.681982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
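The 10-second verify pass above settles at 7337.09 IOPS with an 8192-byte I/O size, and the bandwidth column is simply that product; a quick cross-check against the summary row:

  # IOPS x IO size, expressed in MiB/s (1 MiB = 1048576 bytes).
  awk 'BEGIN { printf "%.2f MiB/s\n", 7337.09 * 8192 / 1048576 }'
  # prints 57.32 MiB/s, matching the Nvme1n1 row above.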
00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:20.959 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:20.959 "params": { 00:08:20.959 "name": "Nvme1", 00:08:20.959 "trtype": "tcp", 00:08:20.959 "traddr": "10.0.0.2", 00:08:20.959 "adrfam": "ipv4", 00:08:20.959 "trsvcid": "4420", 00:08:20.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:20.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:20.959 "hdgst": false, 00:08:20.959 "ddgst": false 00:08:20.959 }, 00:08:20.959 "method": "bdev_nvme_attach_controller" 00:08:20.959 }' 00:08:20.959 [2024-07-24 17:55:27.693917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.693940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.705914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.705936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.717013] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:08:20.959 [2024-07-24 17:55:27.717084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70631 ] 00:08:20.959 [2024-07-24 17:55:27.717915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.717930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.729921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.729943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.741926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.741950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.753925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.753944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.765928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.765948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.777928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.777947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:08:20.959 [2024-07-24 17:55:27.789933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.789956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.801955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.801976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.813939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.813958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.825946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.825966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.837950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.837972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.849954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.849975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.861163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.959 [2024-07-24 17:55:27.861958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.861976] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.959 [2024-07-24 17:55:27.873965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.959 [2024-07-24 17:55:27.873994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.959 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.960 [2024-07-24 17:55:27.885964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.960 [2024-07-24 17:55:27.885985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.960 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.960 [2024-07-24 17:55:27.897979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.960 [2024-07-24 17:55:27.898003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.960 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.960 [2024-07-24 17:55:27.909988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.960 [2024-07-24 17:55:27.910012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.960 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.960 [2024-07-24 17:55:27.921987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.960 [2024-07-24 17:55:27.922013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.960 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.960 [2024-07-24 17:55:27.933994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.960 [2024-07-24 17:55:27.934016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:27.945987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:27.946006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:27.957991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:27.958011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:27.970004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:27.970023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:27.979504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.219 [2024-07-24 17:55:27.982019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:27.982053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:27.994020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:27.994045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.006029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:28.006054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.018030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:08:21.219 [2024-07-24 17:55:28.018055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.030029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:28.030052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.042034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:28.042056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.058052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:28.058076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.070042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:28.070062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.082086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:28.082118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.094075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:28.094105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.106093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:28.106132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.118104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:28.118142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.130090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:28.130117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 [2024-07-24 17:55:28.142105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.219 [2024-07-24 17:55:28.142144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.219 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.219 Running I/O for 5 seconds... 
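For reference, the JSON-RPC request behind each of the failing nvmf_subsystem_add_ns calls above can be reconstructed directly from the parameters echoed in the log (nqn nqn.2016-06.io.spdk:cnode1, namespace with bdev_name malloc0, nsid 1, no_auto_visible false). The following Go snippet is a minimal sketch and not part of the test itself; it only prints the request body such a client would send. The request id value is an assumption, and no socket I/O is performed.

package main

import (
	"encoding/json"
	"fmt"
)

// namespaceParams mirrors the "namespace" map printed in the log:
// map[bdev_name:malloc0 no_auto_visible:false nsid:1]
type namespaceParams struct {
	BdevName      string `json:"bdev_name"`
	Nsid          int    `json:"nsid"`
	NoAutoVisible bool   `json:"no_auto_visible"`
}

// addNsParams mirrors the full params of the nvmf_subsystem_add_ns call.
type addNsParams struct {
	Nqn       string          `json:"nqn"`
	Namespace namespaceParams `json:"namespace"`
}

func main() {
	// Build the JSON-RPC envelope for the call shown repeatedly in the log.
	req := map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1, // assumed id; the log does not show it
		"method":  "nvmf_subsystem_add_ns",
		"params": addNsParams{
			Nqn: "nqn.2016-06.io.spdk:cnode1",
			Namespace: namespaceParams{
				BdevName:      "malloc0",
				Nsid:          1,
				NoAutoVisible: false,
			},
		},
	}

	body, err := json.MarshalIndent(req, "", "  ")
	if err != nil {
		panic(err)
	}
	// Because NSID 1 is already attached to cnode1, the target rejects each
	// such request with Code=-32602 Msg=Invalid parameters, which is the
	// error response repeated throughout the log above.
	fmt.Println(string(body))
}
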
00:08:21.220 [2024-07-24 17:55:28.156769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.220 [2024-07-24 17:55:28.156812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.220 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.220 [2024-07-24 17:55:28.173647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.220 [2024-07-24 17:55:28.173697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.220 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.220 [2024-07-24 17:55:28.189225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.220 [2024-07-24 17:55:28.189287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.220 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.477 [2024-07-24 17:55:28.206662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.206713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.222590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.222638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.243270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.243336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.260279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.260329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.276955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.277025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.292690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.292741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.304275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.304327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.320437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.320488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.337393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.337435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.353808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.353851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.370058] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.370111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.386357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.386402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.398374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.398426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.414756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.414795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.431822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.431860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.478 [2024-07-24 17:55:28.448398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.478 [2024-07-24 17:55:28.448434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.478 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.737 [2024-07-24 17:55:28.464659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.737 [2024-07-24 17:55:28.464710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.737 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.737 [2024-07-24 17:55:28.481597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.737 [2024-07-24 17:55:28.481653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.737 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.737 [2024-07-24 17:55:28.498159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.737 [2024-07-24 17:55:28.498215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.737 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.737 [2024-07-24 17:55:28.515766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.737 [2024-07-24 17:55:28.515818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.737 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.737 [2024-07-24 17:55:28.531546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.737 [2024-07-24 17:55:28.531593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.737 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.737 [2024-07-24 17:55:28.548365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.737 [2024-07-24 17:55:28.548418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.738 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.738 [2024-07-24 17:55:28.565093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.738 [2024-07-24 17:55:28.565145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.738 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.738 [2024-07-24 17:55:28.582322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:21.738 [2024-07-24 17:55:28.582371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.738 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.738 [2024-07-24 17:55:28.599719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.738 [2024-07-24 17:55:28.599765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.738 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.738 [2024-07-24 17:55:28.615571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.738 [2024-07-24 17:55:28.615617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.738 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.738 [2024-07-24 17:55:28.632793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.738 [2024-07-24 17:55:28.632841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.738 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.738 [2024-07-24 17:55:28.648904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.738 [2024-07-24 17:55:28.648953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.738 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.738 [2024-07-24 17:55:28.666099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.738 [2024-07-24 17:55:28.666149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.738 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.738 [2024-07-24 17:55:28.683134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.738 [2024-07-24 17:55:28.683194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.738 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.738 [2024-07-24 17:55:28.698190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.738 [2024-07-24 17:55:28.698250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.738 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.714006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.714050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.730701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.730748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.746692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.746730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.763448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.763516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.780083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.780124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.796615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:21.997 [2024-07-24 17:55:28.796650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.813755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.813792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.830633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.830679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.847291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.847331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.864168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.864214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.879060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.879101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.896701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.896738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.911149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.911191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.926660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.926690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.943056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.943088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:21.997 [2024-07-24 17:55:28.960558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.997 [2024-07-24 17:55:28.960604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.997 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:28.977374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:28.977423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:28.994270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:28.994311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.011532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.011573] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.027196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.027238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.044179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.044221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.060606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.060642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.077124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.077164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.093246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.093291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.104917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.104951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.119096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.119131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.134875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.134910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.151387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.151421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.167533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.167568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.184148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.184192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.200741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.200777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.212504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.212538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:22.258 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.258 [2024-07-24 17:55:29.228488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.258 [2024-07-24 17:55:29.228528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.519 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.519 [2024-07-24 17:55:29.244649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.519 [2024-07-24 17:55:29.244689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.519 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.519 [2024-07-24 17:55:29.256383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.519 [2024-07-24 17:55:29.256418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.519 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.519 [2024-07-24 17:55:29.272198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.519 [2024-07-24 17:55:29.272232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.519 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.519 [2024-07-24 17:55:29.288519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.519 [2024-07-24 17:55:29.288557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.519 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.519 [2024-07-24 17:55:29.299910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.519 [2024-07-24 17:55:29.299941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.519 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:08:22.519 [2024-07-24 17:55:29.315218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.519 [2024-07-24 17:55:29.315259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.519 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.330815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.330846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.520 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.346044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.346077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.520 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.362005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.362037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.520 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.376580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.376611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.520 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.388004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.388034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.520 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.403101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.403131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.520 2024/07/24 17:55:29 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.417943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.417974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.520 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.431925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.431955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.520 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.447018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.447052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.520 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.458892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.458925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.520 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.475343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.475379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.520 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.520 [2024-07-24 17:55:29.492315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.520 [2024-07-24 17:55:29.492354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.799 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.799 [2024-07-24 17:55:29.508942] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.799 [2024-07-24 17:55:29.508977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.799 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.799 [2024-07-24 17:55:29.525317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.799 [2024-07-24 17:55:29.525350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.799 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.799 [2024-07-24 17:55:29.541491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.799 [2024-07-24 17:55:29.541524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.799 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.799 [2024-07-24 17:55:29.557936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.799 [2024-07-24 17:55:29.557983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.799 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.799 [2024-07-24 17:55:29.573769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.799 [2024-07-24 17:55:29.573804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.799 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.799 [2024-07-24 17:55:29.589056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.799 [2024-07-24 17:55:29.589088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.799 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.799 [2024-07-24 17:55:29.604648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.799 [2024-07-24 17:55:29.604685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.799 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.799 [2024-07-24 17:55:29.618910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.799 [2024-07-24 17:55:29.618950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.799 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.799 [2024-07-24 17:55:29.632929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.799 [2024-07-24 17:55:29.632975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.799 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.800 [2024-07-24 17:55:29.648874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.800 [2024-07-24 17:55:29.648917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.800 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.800 [2024-07-24 17:55:29.664734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.800 [2024-07-24 17:55:29.664773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.800 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.800 [2024-07-24 17:55:29.683714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.800 [2024-07-24 17:55:29.683752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.800 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.800 [2024-07-24 17:55:29.698584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.800 [2024-07-24 17:55:29.698617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.800 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.800 [2024-07-24 17:55:29.715347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:22.800 [2024-07-24 17:55:29.715383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.800 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.800 [2024-07-24 17:55:29.730813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.800 [2024-07-24 17:55:29.730847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.800 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.800 [2024-07-24 17:55:29.744964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.800 [2024-07-24 17:55:29.744996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.800 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:22.800 [2024-07-24 17:55:29.760367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.800 [2024-07-24 17:55:29.760400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.800 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.058 [2024-07-24 17:55:29.777034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.058 [2024-07-24 17:55:29.777080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.058 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.058 [2024-07-24 17:55:29.792796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.058 [2024-07-24 17:55:29.792833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.058 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.058 [2024-07-24 17:55:29.806972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.058 [2024-07-24 17:55:29.807008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.058 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.058 [2024-07-24 17:55:29.821953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.058 [2024-07-24 17:55:29.821985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.058 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.058 [2024-07-24 17:55:29.837907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.058 [2024-07-24 17:55:29.837940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.058 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.058 [2024-07-24 17:55:29.852449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.059 [2024-07-24 17:55:29.852479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.059 [2024-07-24 17:55:29.867194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.059 [2024-07-24 17:55:29.867226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.059 [2024-07-24 17:55:29.883191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.059 [2024-07-24 17:55:29.883224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.059 [2024-07-24 17:55:29.894179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.059 [2024-07-24 17:55:29.894209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.059 [2024-07-24 17:55:29.910166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:23.059 [2024-07-24 17:55:29.910205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.059 [2024-07-24 17:55:29.926879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.059 [2024-07-24 17:55:29.926917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.059 [2024-07-24 17:55:29.943358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.059 [2024-07-24 17:55:29.943397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.059 [2024-07-24 17:55:29.960010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.059 [2024-07-24 17:55:29.960052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.059 [2024-07-24 17:55:29.976193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.059 [2024-07-24 17:55:29.976229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.059 [2024-07-24 17:55:29.990507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.059 [2024-07-24 17:55:29.990537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.059 [2024-07-24 17:55:30.005549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.059 [2024-07-24 17:55:30.005579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.059 [2024-07-24 17:55:30.022171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.059 [2024-07-24 17:55:30.022203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.059 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.038827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.038861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.055953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.055993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.072749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.072790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.089229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.089281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.106186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.106232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.122377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.122420] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.139406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.139443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.156267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.156317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.172997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.173036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.189732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.189784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.206600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.206640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.224116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.224164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.239679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.239725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.251348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.251385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.266910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.266947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.318 [2024-07-24 17:55:30.282627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.318 [2024-07-24 17:55:30.282663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.318 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.296990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.297026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.311476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.311520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.326641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.326688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.343553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.343605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.359949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.359997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.377313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.377353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.393402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.393440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.410531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.410572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.426615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.426654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:08:23.578 [2024-07-24 17:55:30.443197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.443267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.459739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.459774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.475717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.475753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.489370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.489403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.504674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.504713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.521118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.521148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.578 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.578 [2024-07-24 17:55:30.536803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.578 [2024-07-24 17:55:30.536837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.579 2024/07/24 17:55:30 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.579 [2024-07-24 17:55:30.548362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.579 [2024-07-24 17:55:30.548392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.579 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.563313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.563351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.574580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.574625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.590792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.590829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.606735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.606772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.618847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.618887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.634697] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.634740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.651974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.652018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.667691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.667747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.685729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.685773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.700970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.701010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.716165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.716206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.728956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.837 [2024-07-24 17:55:30.728995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.837 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.837 [2024-07-24 17:55:30.744420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.838 [2024-07-24 17:55:30.744461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.838 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.838 [2024-07-24 17:55:30.755527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.838 [2024-07-24 17:55:30.755565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.838 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.838 [2024-07-24 17:55:30.773046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.838 [2024-07-24 17:55:30.773085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.838 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.838 [2024-07-24 17:55:30.788112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.838 [2024-07-24 17:55:30.788154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.838 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:23.838 [2024-07-24 17:55:30.803793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.838 [2024-07-24 17:55:30.803835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.838 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.096 [2024-07-24 17:55:30.820640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.096 [2024-07-24 17:55:30.820678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.096 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.096 [2024-07-24 17:55:30.836883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:24.096 [2024-07-24 17:55:30.836920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.096 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.096 [2024-07-24 17:55:30.852964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.096 [2024-07-24 17:55:30.853002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.096 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.096 [2024-07-24 17:55:30.866465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.096 [2024-07-24 17:55:30.866502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.096 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.096 [2024-07-24 17:55:30.882911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.096 [2024-07-24 17:55:30.882958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.096 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.096 [2024-07-24 17:55:30.899005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.096 [2024-07-24 17:55:30.899044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.096 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.096 [2024-07-24 17:55:30.915506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.096 [2024-07-24 17:55:30.915544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.096 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.096 [2024-07-24 17:55:30.931945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.096 [2024-07-24 17:55:30.931983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.096 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.097 [2024-07-24 17:55:30.944216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.097 [2024-07-24 17:55:30.944265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.097 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.097 [2024-07-24 17:55:30.960378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.097 [2024-07-24 17:55:30.960419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.097 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.097 [2024-07-24 17:55:30.976978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.097 [2024-07-24 17:55:30.977025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.097 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.097 [2024-07-24 17:55:30.992992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.097 [2024-07-24 17:55:30.993029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.097 2024/07/24 17:55:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.097 [2024-07-24 17:55:31.007223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.097 [2024-07-24 17:55:31.007264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.097 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.097 [2024-07-24 17:55:31.022984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.097 [2024-07-24 17:55:31.023014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.097 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.097 [2024-07-24 17:55:31.038828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:24.097 [2024-07-24 17:55:31.038858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.097 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.097 [2024-07-24 17:55:31.053538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.097 [2024-07-24 17:55:31.053570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.097 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.097 [2024-07-24 17:55:31.069991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.097 [2024-07-24 17:55:31.070029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.086167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.086198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.101463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.101495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.117256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.117296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.132182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.132218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.147559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.147590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.162164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.162197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.174288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.174320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.189871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.189911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.206007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.206041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.217989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.218020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.234074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.234107] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.250811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.250847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.267393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.267439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.283714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.283756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.300376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.300408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.315855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.315887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:24.356 [2024-07-24 17:55:31.330349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.356 [2024-07-24 17:55:31.330378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.356 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:08:24.616 [2024-07-24 17:55:31.341745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.616 [2024-07-24 17:55:31.341773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.616 2024/07/24 17:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
... (the same three entries, "Requested NSID 1 already in use", "Unable to add namespace", and the JSON-RPC Code=-32602 Msg=Invalid parameters error for nvmf_subsystem_add_ns, repeat for every subsequent attempt from 17:55:31.357 through 17:55:33.153, elapsed timestamps 00:08:24.616 to 00:08:26.225) ...
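All of the failures above come from re-issuing the namespace-add RPC while NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, so each call is rejected with JSON-RPC error -32602 (Invalid parameters). For reference, a minimal sketch of that request in Python, assuming SPDK's default RPC socket at /var/tmp/spdk.sock; the method name and parameters are copied from the log entries above, everything else is illustrative:

import json
import socket

# Sketch only, not the test's own tooling: replays the RPC that the log shows
# being rejected. The socket path is an assumption (SPDK's default application
# socket); method and params come from the log entries above.
SOCK_PATH = "/var/tmp/spdk.sock"

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCK_PATH)
    sock.sendall(json.dumps(request).encode())
    # A single recv() is kept simple for this small reply; when NSID 1 is
    # already attached, the response carries error code -32602 as in the log.
    print(json.loads(sock.recv(65536).decode()))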
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.225 [2024-07-24 17:55:33.111062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.225 [2024-07-24 17:55:33.111113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.225 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.225 [2024-07-24 17:55:33.122939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.225 [2024-07-24 17:55:33.122989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.225 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.225 [2024-07-24 17:55:33.138464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.225 [2024-07-24 17:55:33.138512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.225 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.225 [2024-07-24 17:55:33.153588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.225 [2024-07-24 17:55:33.153637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.225 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.225 00:08:26.225 Latency(us) 00:08:26.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.225 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:26.225 Nvme1n1 : 5.01 14402.23 112.52 0.00 0.00 8877.67 3900.95 16976.94 00:08:26.225 =================================================================================================================== 00:08:26.225 Total : 14402.23 112.52 0.00 0.00 8877.67 3900.95 16976.94 00:08:26.225 [2024-07-24 17:55:33.162661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.225 [2024-07-24 17:55:33.162702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.225 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.225 [2024-07-24 17:55:33.174666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.225 [2024-07-24 
17:55:33.174704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.225 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.225 [2024-07-24 17:55:33.186645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.225 [2024-07-24 17:55:33.186676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.225 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.225 [2024-07-24 17:55:33.198652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.225 [2024-07-24 17:55:33.198683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.210674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.210708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.222664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.222698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.234665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.234701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.246685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.246722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.258678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.258716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.270672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.270703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.282676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.282711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.294672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.294704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.306677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.306705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.318699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.318738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.330700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.330731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.342693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.342720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 [2024-07-24 17:55:33.354765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.484 [2024-07-24 17:55:33.354810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.484 2024/07/24 17:55:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:26.484 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (70631) - No such process 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 70631 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.484 delay0 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.484 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:26.742 [2024-07-24 17:55:33.552305] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:33.309 Initializing NVMe Controllers 
00:08:33.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:33.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:33.309 Initialization complete. Launching workers. 00:08:33.309 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 65 00:08:33.309 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 352, failed to submit 33 00:08:33.309 success 153, unsuccess 199, failed 0 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.309 rmmod nvme_tcp 00:08:33.309 rmmod nvme_fabrics 00:08:33.309 rmmod nvme_keyring 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 70477 ']' 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 70477 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 70477 ']' 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 70477 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70477 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:33.309 killing process with pid 70477 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70477' 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 70477 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 70477 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:33.309 00:08:33.309 real 0m24.107s 00:08:33.309 user 0m38.911s 00:08:33.309 sys 0m7.494s 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.309 ************************************ 00:08:33.309 END TEST nvmf_zcopy 00:08:33.309 ************************************ 00:08:33.309 17:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.309 ************************************ 00:08:33.309 START TEST nvmf_nmic 00:08:33.309 ************************************ 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:33.309 * Looking for test storage... 
00:08:33.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.309 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.310 17:55:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:33.310 Cannot find device "nvmf_tgt_br" 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.310 Cannot find device "nvmf_tgt_br2" 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:08:33.310 Cannot find device "nvmf_tgt_br" 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:33.310 Cannot find device "nvmf_tgt_br2" 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:08:33.310 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:33.568 17:55:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:33.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:33.568 00:08:33.568 --- 10.0.0.2 ping statistics --- 00:08:33.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.568 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:33.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:33.568 00:08:33.568 --- 10.0.0.3 ping statistics --- 00:08:33.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.568 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:33.568 00:08:33.568 --- 10.0.0.1 ping statistics --- 00:08:33.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.568 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:33.568 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=70950 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 70950 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 70950 ']' 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.887 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:33.887 [2024-07-24 17:55:40.618087] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:08:33.887 [2024-07-24 17:55:40.618196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.887 [2024-07-24 17:55:40.761021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.145 [2024-07-24 17:55:40.880370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.145 [2024-07-24 17:55:40.880654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.145 [2024-07-24 17:55:40.880803] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.145 [2024-07-24 17:55:40.880876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.145 [2024-07-24 17:55:40.880976] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.145 [2024-07-24 17:55:40.881230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.145 [2024-07-24 17:55:40.881636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.145 [2024-07-24 17:55:40.881642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.145 [2024-07-24 17:55:40.881790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.711 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.711 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:34.711 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:34.711 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:34.711 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.711 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.711 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:34.711 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.711 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.711 [2024-07-24 17:55:41.667228] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.968 Malloc0 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.968 17:55:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.968 [2024-07-24 17:55:41.742612] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.968 test case1: single bdev can't be used in multiple subsystems 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:34.968 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.969 [2024-07-24 17:55:41.766480] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:34.969 [2024-07-24 17:55:41.766518] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:34.969 [2024-07-24 17:55:41.766530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.969 2024/07/24 17:55:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:34.969 request: 00:08:34.969 { 00:08:34.969 "method": "nvmf_subsystem_add_ns", 00:08:34.969 "params": { 00:08:34.969 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:34.969 "namespace": { 00:08:34.969 "bdev_name": "Malloc0", 00:08:34.969 "no_auto_visible": false 00:08:34.969 } 00:08:34.969 } 00:08:34.969 } 00:08:34.969 Got JSON-RPC error response 00:08:34.969 GoRPCClient: error on JSON-RPC call 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:34.969 Adding namespace failed - expected result. 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:34.969 test case2: host connect to nvmf target in multiple paths 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.969 [2024-07-24 17:55:41.778617] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.969 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.226 17:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:35.226 17:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.226 17:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:08:35.226 17:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.226 17:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:35.226 17:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:08:37.752 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:37.752 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:37.752 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.752 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:37.752 17:55:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.752 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:08:37.752 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:37.752 [global] 00:08:37.752 thread=1 00:08:37.752 invalidate=1 00:08:37.752 rw=write 00:08:37.752 time_based=1 00:08:37.752 runtime=1 00:08:37.752 ioengine=libaio 00:08:37.752 direct=1 00:08:37.752 bs=4096 00:08:37.752 iodepth=1 00:08:37.752 norandommap=0 00:08:37.752 numjobs=1 00:08:37.752 00:08:37.752 verify_dump=1 00:08:37.752 verify_backlog=512 00:08:37.752 verify_state_save=0 00:08:37.752 do_verify=1 00:08:37.752 verify=crc32c-intel 00:08:37.752 [job0] 00:08:37.752 filename=/dev/nvme0n1 00:08:37.752 Could not set queue depth (nvme0n1) 00:08:37.752 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:37.752 fio-3.35 00:08:37.752 Starting 1 thread 00:08:38.686 00:08:38.686 job0: (groupid=0, jobs=1): err= 0: pid=71065: Wed Jul 24 17:55:45 2024 00:08:38.686 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:08:38.686 slat (nsec): min=8792, max=49419, avg=12857.66, stdev=2933.68 00:08:38.686 clat (usec): min=104, max=2875, avg=135.73, stdev=59.81 00:08:38.686 lat (usec): min=115, max=2893, avg=148.59, stdev=60.44 00:08:38.686 clat percentiles (usec): 00:08:38.686 | 1.00th=[ 115], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 125], 00:08:38.686 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:08:38.686 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:08:38.686 | 99.00th=[ 174], 99.50th=[ 215], 99.90th=[ 627], 99.95th=[ 1991], 00:08:38.686 | 99.99th=[ 2868] 00:08:38.686 write: IOPS=3988, BW=15.6MiB/s (16.3MB/s)(15.6MiB/1001msec); 0 zone resets 00:08:38.686 slat (usec): min=14, max=225, avg=19.64, stdev= 6.84 00:08:38.686 clat (usec): min=73, max=625, avg=95.02, stdev=15.18 00:08:38.686 lat (usec): min=87, max=641, avg=114.66, stdev=17.93 00:08:38.686 clat percentiles (usec): 00:08:38.686 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 00:08:38.686 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 96], 00:08:38.686 | 70.00th=[ 98], 80.00th=[ 102], 90.00th=[ 108], 95.00th=[ 112], 00:08:38.686 | 99.00th=[ 125], 99.50th=[ 133], 99.90th=[ 289], 99.95th=[ 424], 00:08:38.686 | 99.99th=[ 627] 00:08:38.686 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:08:38.686 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:38.686 lat (usec) : 100=39.97%, 250=59.79%, 500=0.15%, 750=0.05%, 1000=0.01% 00:08:38.686 lat (msec) : 2=0.01%, 4=0.01% 00:08:38.686 cpu : usr=2.10%, sys=9.60%, ctx=7576, majf=0, minf=2 00:08:38.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:38.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.686 issued rwts: total=3584,3992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:38.686 00:08:38.686 Run status group 0 (all jobs): 00:08:38.686 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:08:38.686 WRITE: bw=15.6MiB/s (16.3MB/s), 
15.6MiB/s-15.6MiB/s (16.3MB/s-16.3MB/s), io=15.6MiB (16.4MB), run=1001-1001msec 00:08:38.686 00:08:38.686 Disk stats (read/write): 00:08:38.686 nvme0n1: ios=3307/3584, merge=0/0, ticks=473/375, in_queue=848, util=91.18% 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:38.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.686 rmmod nvme_tcp 00:08:38.686 rmmod nvme_fabrics 00:08:38.686 rmmod nvme_keyring 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 70950 ']' 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 70950 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 70950 ']' 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 70950 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.686 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70950 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.951 killing process with pid 70950 00:08:38.951 17:55:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70950' 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 70950 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 70950 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.951 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.227 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:39.227 00:08:39.227 real 0m5.904s 00:08:39.227 user 0m19.441s 00:08:39.227 sys 0m1.633s 00:08:39.227 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.227 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:39.227 ************************************ 00:08:39.227 END TEST nvmf_nmic 00:08:39.227 ************************************ 00:08:39.227 17:55:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:39.227 17:55:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:39.227 17:55:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.227 17:55:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.227 ************************************ 00:08:39.227 START TEST nvmf_fio_target 00:08:39.227 ************************************ 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:39.227 * Looking for test storage... 
00:08:39.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.227 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:39.228 
17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:39.228 Cannot find device "nvmf_tgt_br" 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:39.228 Cannot find device "nvmf_tgt_br2" 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:39.228 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:39.486 Cannot find device "nvmf_tgt_br" 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:39.486 Cannot find device "nvmf_tgt_br2" 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:39.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:39.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:39.486 
17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:39.486 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:39.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:08:39.744 00:08:39.744 --- 10.0.0.2 ping statistics --- 00:08:39.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.744 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:39.744 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:39.744 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:08:39.744 00:08:39.744 --- 10.0.0.3 ping statistics --- 00:08:39.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.744 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:39.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:08:39.744 00:08:39.744 --- 10.0.0.1 ping statistics --- 00:08:39.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.744 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=71245 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 71245 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 71245 ']' 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.744 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:39.744 [2024-07-24 17:55:46.612210] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
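For reference, the veth/namespace topology that nvmf_veth_init assembles in the trace above can be reproduced on its own with a short sequence like the one below. This is a minimal sketch, not the common.sh implementation: it assumes root privileges and simply reuses the interface names and 10.0.0.0/24 addresses seen in the trace (nvmf_init_if on the host, nvmf_tgt_if/nvmf_tgt_if2 inside the nvmf_tgt_ns_spdk namespace, host-side peers enslaved to the nvmf_br bridge).

# Sketch: rebuild the NVMe/TCP test network outside the harness (assumes root).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side, first listener
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # target side, second listener
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                  # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                  # bridge ties the host-side peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic to port 4420
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # forward within the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # host -> namespace reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host reachability

With this topology in place, the target application is run inside the namespace (as the trace does with ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt), while the kernel NVMe/TCP initiator on the host connects to 10.0.0.2:4420.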
00:08:39.744 [2024-07-24 17:55:46.612341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.003 [2024-07-24 17:55:46.759313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.003 [2024-07-24 17:55:46.869033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.003 [2024-07-24 17:55:46.869087] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.003 [2024-07-24 17:55:46.869099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.003 [2024-07-24 17:55:46.869109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.003 [2024-07-24 17:55:46.869118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.003 [2024-07-24 17:55:46.869274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.003 [2024-07-24 17:55:46.869380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.003 [2024-07-24 17:55:46.870684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.003 [2024-07-24 17:55:46.870685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.261 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.261 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:08:40.261 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.261 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.261 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:40.261 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.261 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:40.519 [2024-07-24 17:55:47.290013] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.519 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.087 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:41.087 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.348 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:41.348 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.612 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:41.612 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.881 17:55:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:41.881 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:42.152 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:42.425 17:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:42.425 17:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:42.699 17:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:42.699 17:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.281 17:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:43.281 17:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:43.281 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:43.539 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:43.539 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:43.797 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:43.797 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:44.055 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.313 [2024-07-24 17:55:51.128624] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.313 17:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:44.571 17:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:44.829 17:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.087 17:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:45.087 17:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:08:45.087 17:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:08:45.087 17:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:08:45.087 17:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:08:45.087 17:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.016 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.016 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.016 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.016 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:08:47.016 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.016 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:08:47.016 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:47.016 [global] 00:08:47.016 thread=1 00:08:47.016 invalidate=1 00:08:47.016 rw=write 00:08:47.016 time_based=1 00:08:47.016 runtime=1 00:08:47.016 ioengine=libaio 00:08:47.016 direct=1 00:08:47.016 bs=4096 00:08:47.016 iodepth=1 00:08:47.016 norandommap=0 00:08:47.016 numjobs=1 00:08:47.016 00:08:47.016 verify_dump=1 00:08:47.016 verify_backlog=512 00:08:47.016 verify_state_save=0 00:08:47.016 do_verify=1 00:08:47.016 verify=crc32c-intel 00:08:47.016 [job0] 00:08:47.016 filename=/dev/nvme0n1 00:08:47.016 [job1] 00:08:47.016 filename=/dev/nvme0n2 00:08:47.016 [job2] 00:08:47.016 filename=/dev/nvme0n3 00:08:47.016 [job3] 00:08:47.016 filename=/dev/nvme0n4 00:08:47.274 Could not set queue depth (nvme0n1) 00:08:47.274 Could not set queue depth (nvme0n2) 00:08:47.274 Could not set queue depth (nvme0n3) 00:08:47.274 Could not set queue depth (nvme0n4) 00:08:47.274 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.274 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.274 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.274 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.274 fio-3.35 00:08:47.274 Starting 4 threads 00:08:48.646 00:08:48.646 job0: (groupid=0, jobs=1): err= 0: pid=71535: Wed Jul 24 17:55:55 2024 00:08:48.646 read: IOPS=1820, BW=7281KiB/s (7455kB/s)(7288KiB/1001msec) 00:08:48.646 slat (nsec): min=12981, max=39641, avg=14553.86, stdev=2322.42 00:08:48.646 clat (usec): min=131, max=2284, avg=265.30, stdev=89.87 00:08:48.646 lat (usec): min=145, max=2298, avg=279.86, stdev=89.75 00:08:48.646 clat percentiles (usec): 00:08:48.646 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 163], 00:08:48.646 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:08:48.646 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 359], 00:08:48.646 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[ 750], 99.95th=[ 2278], 00:08:48.646 | 99.99th=[ 2278] 00:08:48.646 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:48.646 slat 
(usec): min=14, max=154, avg=25.69, stdev= 7.79 00:08:48.646 clat (usec): min=85, max=3598, avg=210.56, stdev=118.30 00:08:48.646 lat (usec): min=106, max=3628, avg=236.25, stdev=120.62 00:08:48.646 clat percentiles (usec): 00:08:48.647 | 1.00th=[ 98], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 118], 00:08:48.647 | 30.00th=[ 186], 40.00th=[ 198], 50.00th=[ 229], 60.00th=[ 243], 00:08:48.647 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 293], 00:08:48.647 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 848], 99.95th=[ 3130], 00:08:48.647 | 99.99th=[ 3589] 00:08:48.647 bw ( KiB/s): min= 8192, max= 8192, per=25.44%, avg=8192.00, stdev= 0.00, samples=1 00:08:48.647 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:48.647 lat (usec) : 100=1.19%, 250=49.61%, 500=48.94%, 750=0.13%, 1000=0.05% 00:08:48.647 lat (msec) : 4=0.08% 00:08:48.647 cpu : usr=1.20%, sys=6.30%, ctx=3871, majf=0, minf=11 00:08:48.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:48.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.647 issued rwts: total=1822,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:48.647 job1: (groupid=0, jobs=1): err= 0: pid=71536: Wed Jul 24 17:55:55 2024 00:08:48.647 read: IOPS=1871, BW=7485KiB/s (7664kB/s)(7492KiB/1001msec) 00:08:48.647 slat (nsec): min=7713, max=45674, avg=13557.56, stdev=3361.71 00:08:48.647 clat (usec): min=142, max=615, avg=290.64, stdev=89.63 00:08:48.647 lat (usec): min=157, max=637, avg=304.20, stdev=90.92 00:08:48.647 clat percentiles (usec): 00:08:48.647 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:08:48.647 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:08:48.647 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 482], 95.00th=[ 545], 00:08:48.647 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 611], 99.95th=[ 619], 00:08:48.647 | 99.99th=[ 619] 00:08:48.647 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:48.647 slat (usec): min=14, max=114, avg=21.09, stdev= 5.95 00:08:48.647 clat (usec): min=86, max=687, avg=186.32, stdev=27.47 00:08:48.647 lat (usec): min=101, max=702, avg=207.41, stdev=27.57 00:08:48.647 clat percentiles (usec): 00:08:48.647 | 1.00th=[ 131], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 00:08:48.647 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:08:48.647 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 219], 00:08:48.647 | 99.00th=[ 249], 99.50th=[ 277], 99.90th=[ 619], 99.95th=[ 627], 00:08:48.647 | 99.99th=[ 685] 00:08:48.647 bw ( KiB/s): min= 8752, max= 8752, per=27.18%, avg=8752.00, stdev= 0.00, samples=1 00:08:48.647 iops : min= 2188, max= 2188, avg=2188.00, stdev= 0.00, samples=1 00:08:48.647 lat (usec) : 100=0.20%, 250=66.23%, 500=29.51%, 750=4.06% 00:08:48.647 cpu : usr=1.30%, sys=5.30%, ctx=3923, majf=0, minf=7 00:08:48.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:48.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.647 issued rwts: total=1873,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:48.647 job2: (groupid=0, jobs=1): err= 0: pid=71537: Wed Jul 24 17:55:55 2024 
00:08:48.647 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:08:48.647 slat (nsec): min=8531, max=59862, avg=14764.56, stdev=6564.30 00:08:48.647 clat (usec): min=146, max=7719, avg=307.43, stdev=231.40 00:08:48.647 lat (usec): min=159, max=7758, avg=322.19, stdev=231.77 00:08:48.647 clat percentiles (usec): 00:08:48.647 | 1.00th=[ 188], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 251], 00:08:48.647 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 293], 00:08:48.647 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 359], 95.00th=[ 388], 00:08:48.647 | 99.00th=[ 502], 99.50th=[ 799], 99.90th=[ 3490], 99.95th=[ 7701], 00:08:48.647 | 99.99th=[ 7701] 00:08:48.647 write: IOPS=1868, BW=7473KiB/s (7652kB/s)(7480KiB/1001msec); 0 zone resets 00:08:48.647 slat (usec): min=13, max=156, avg=22.11, stdev= 8.24 00:08:48.647 clat (usec): min=109, max=3058, avg=245.16, stdev=77.80 00:08:48.647 lat (usec): min=128, max=3107, avg=267.27, stdev=80.06 00:08:48.647 clat percentiles (usec): 00:08:48.647 | 1.00th=[ 123], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 202], 00:08:48.647 | 30.00th=[ 225], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 260], 00:08:48.647 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:08:48.647 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 881], 99.95th=[ 3064], 00:08:48.647 | 99.99th=[ 3064] 00:08:48.647 bw ( KiB/s): min= 8192, max= 8192, per=25.44%, avg=8192.00, stdev= 0.00, samples=1 00:08:48.647 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:48.647 lat (usec) : 250=31.00%, 500=68.35%, 750=0.35%, 1000=0.09% 00:08:48.647 lat (msec) : 2=0.06%, 4=0.12%, 10=0.03% 00:08:48.647 cpu : usr=1.80%, sys=4.40%, ctx=3410, majf=0, minf=8 00:08:48.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:48.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.647 issued rwts: total=1536,1870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:48.647 job3: (groupid=0, jobs=1): err= 0: pid=71538: Wed Jul 24 17:55:55 2024 00:08:48.647 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:48.647 slat (nsec): min=9243, max=60151, avg=14314.28, stdev=4620.98 00:08:48.647 clat (usec): min=142, max=366, avg=257.56, stdev=24.92 00:08:48.647 lat (usec): min=152, max=380, avg=271.87, stdev=24.42 00:08:48.647 clat percentiles (usec): 00:08:48.647 | 1.00th=[ 169], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 241], 00:08:48.647 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:08:48.647 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:08:48.647 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 367], 00:08:48.647 | 99.99th=[ 367] 00:08:48.647 write: IOPS=2088, BW=8356KiB/s (8556kB/s)(8364KiB/1001msec); 0 zone resets 00:08:48.647 slat (nsec): min=14339, max=75962, avg=21343.70, stdev=5606.90 00:08:48.647 clat (usec): min=103, max=1729, avg=187.78, stdev=41.96 00:08:48.647 lat (usec): min=119, max=1744, avg=209.12, stdev=41.63 00:08:48.647 clat percentiles (usec): 00:08:48.647 | 1.00th=[ 141], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 169], 00:08:48.647 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:08:48.647 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 223], 00:08:48.647 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 453], 99.95th=[ 594], 00:08:48.647 | 99.99th=[ 1729] 
00:08:48.647 bw ( KiB/s): min= 8752, max= 8752, per=27.18%, avg=8752.00, stdev= 0.00, samples=1 00:08:48.647 iops : min= 2188, max= 2188, avg=2188.00, stdev= 0.00, samples=1 00:08:48.647 lat (usec) : 250=68.23%, 500=31.72%, 750=0.02% 00:08:48.647 lat (msec) : 2=0.02% 00:08:48.647 cpu : usr=1.70%, sys=5.40%, ctx=4139, majf=0, minf=9 00:08:48.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:48.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.647 issued rwts: total=2048,2091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:48.647 00:08:48.647 Run status group 0 (all jobs): 00:08:48.647 READ: bw=28.4MiB/s (29.8MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.4MiB (29.8MB), run=1001-1001msec 00:08:48.647 WRITE: bw=31.4MiB/s (33.0MB/s), 7473KiB/s-8356KiB/s (7652kB/s-8556kB/s), io=31.5MiB (33.0MB), run=1001-1001msec 00:08:48.647 00:08:48.647 Disk stats (read/write): 00:08:48.647 nvme0n1: ios=1446/1536, merge=0/0, ticks=435/384, in_queue=819, util=86.76% 00:08:48.647 nvme0n2: ios=1576/2048, merge=0/0, ticks=428/407, in_queue=835, util=87.28% 00:08:48.647 nvme0n3: ios=1388/1536, merge=0/0, ticks=413/387, in_queue=800, util=88.27% 00:08:48.647 nvme0n4: ios=1539/2048, merge=0/0, ticks=399/397, in_queue=796, util=89.65% 00:08:48.647 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:48.647 [global] 00:08:48.647 thread=1 00:08:48.647 invalidate=1 00:08:48.647 rw=randwrite 00:08:48.647 time_based=1 00:08:48.647 runtime=1 00:08:48.647 ioengine=libaio 00:08:48.647 direct=1 00:08:48.647 bs=4096 00:08:48.647 iodepth=1 00:08:48.647 norandommap=0 00:08:48.647 numjobs=1 00:08:48.647 00:08:48.647 verify_dump=1 00:08:48.647 verify_backlog=512 00:08:48.647 verify_state_save=0 00:08:48.647 do_verify=1 00:08:48.647 verify=crc32c-intel 00:08:48.647 [job0] 00:08:48.647 filename=/dev/nvme0n1 00:08:48.647 [job1] 00:08:48.647 filename=/dev/nvme0n2 00:08:48.647 [job2] 00:08:48.647 filename=/dev/nvme0n3 00:08:48.647 [job3] 00:08:48.647 filename=/dev/nvme0n4 00:08:48.647 Could not set queue depth (nvme0n1) 00:08:48.647 Could not set queue depth (nvme0n2) 00:08:48.647 Could not set queue depth (nvme0n3) 00:08:48.647 Could not set queue depth (nvme0n4) 00:08:48.647 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.647 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.647 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.647 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.647 fio-3.35 00:08:48.647 Starting 4 threads 00:08:50.045 00:08:50.045 job0: (groupid=0, jobs=1): err= 0: pid=71591: Wed Jul 24 17:55:56 2024 00:08:50.045 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:50.045 slat (nsec): min=7537, max=49675, avg=18511.70, stdev=4828.52 00:08:50.045 clat (usec): min=139, max=3933, avg=239.08, stdev=142.74 00:08:50.045 lat (usec): min=157, max=3960, avg=257.59, stdev=141.33 00:08:50.045 clat percentiles (usec): 00:08:50.045 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:08:50.045 | 
30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 196], 00:08:50.045 | 70.00th=[ 210], 80.00th=[ 355], 90.00th=[ 404], 95.00th=[ 420], 00:08:50.045 | 99.00th=[ 482], 99.50th=[ 545], 99.90th=[ 1352], 99.95th=[ 2999], 00:08:50.045 | 99.99th=[ 3949] 00:08:50.045 write: IOPS=2319, BW=9279KiB/s (9501kB/s)(9288KiB/1001msec); 0 zone resets 00:08:50.045 slat (nsec): min=12564, max=94574, avg=27751.29, stdev=7522.43 00:08:50.045 clat (usec): min=93, max=6447, avg=171.77, stdev=146.48 00:08:50.045 lat (usec): min=124, max=6485, avg=199.52, stdev=145.98 00:08:50.045 clat percentiles (usec): 00:08:50.045 | 1.00th=[ 110], 5.00th=[ 118], 10.00th=[ 123], 20.00th=[ 129], 00:08:50.045 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 145], 60.00th=[ 153], 00:08:50.045 | 70.00th=[ 172], 80.00th=[ 225], 90.00th=[ 253], 95.00th=[ 273], 00:08:50.045 | 99.00th=[ 318], 99.50th=[ 400], 99.90th=[ 922], 99.95th=[ 1795], 00:08:50.045 | 99.99th=[ 6456] 00:08:50.045 bw ( KiB/s): min=12288, max=12288, per=32.61%, avg=12288.00, stdev= 0.00, samples=1 00:08:50.045 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:50.045 lat (usec) : 100=0.11%, 250=82.22%, 500=17.14%, 750=0.32%, 1000=0.09% 00:08:50.045 lat (msec) : 2=0.05%, 4=0.05%, 10=0.02% 00:08:50.045 cpu : usr=2.30%, sys=7.80%, ctx=4371, majf=0, minf=9 00:08:50.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.045 issued rwts: total=2048,2322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.045 job1: (groupid=0, jobs=1): err= 0: pid=71592: Wed Jul 24 17:55:56 2024 00:08:50.045 read: IOPS=2334, BW=9339KiB/s (9563kB/s)(9348KiB/1001msec) 00:08:50.045 slat (nsec): min=8709, max=59834, avg=14621.90, stdev=4637.84 00:08:50.045 clat (usec): min=129, max=2604, avg=229.43, stdev=135.27 00:08:50.045 lat (usec): min=139, max=2635, avg=244.06, stdev=136.88 00:08:50.045 clat percentiles (usec): 00:08:50.045 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:08:50.045 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 172], 00:08:50.045 | 70.00th=[ 190], 80.00th=[ 347], 90.00th=[ 474], 95.00th=[ 519], 00:08:50.045 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 938], 99.95th=[ 1123], 00:08:50.045 | 99.99th=[ 2606] 00:08:50.045 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:50.045 slat (nsec): min=10097, max=90556, avg=19691.61, stdev=6200.69 00:08:50.045 clat (usec): min=87, max=4160, avg=145.18, stdev=124.75 00:08:50.045 lat (usec): min=108, max=4212, avg=164.87, stdev=125.50 00:08:50.045 clat percentiles (usec): 00:08:50.045 | 1.00th=[ 100], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 115], 00:08:50.045 | 30.00th=[ 118], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 130], 00:08:50.045 | 70.00th=[ 137], 80.00th=[ 174], 90.00th=[ 196], 95.00th=[ 208], 00:08:50.045 | 99.00th=[ 343], 99.50th=[ 404], 99.90th=[ 2147], 99.95th=[ 3425], 00:08:50.045 | 99.99th=[ 4146] 00:08:50.045 bw ( KiB/s): min=12288, max=12288, per=32.61%, avg=12288.00, stdev= 0.00, samples=1 00:08:50.045 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:50.045 lat (usec) : 100=0.53%, 250=87.16%, 500=8.45%, 750=3.63%, 1000=0.08% 00:08:50.045 lat (msec) : 2=0.06%, 4=0.06%, 10=0.02% 00:08:50.045 cpu : usr=1.90%, sys=6.50%, ctx=4897, majf=0, minf=18 00:08:50.045 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.045 issued rwts: total=2337,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.045 job2: (groupid=0, jobs=1): err= 0: pid=71593: Wed Jul 24 17:55:56 2024 00:08:50.045 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:50.045 slat (nsec): min=8806, max=53071, avg=15848.54, stdev=5614.69 00:08:50.045 clat (usec): min=142, max=8049, avg=256.34, stdev=257.62 00:08:50.045 lat (usec): min=152, max=8065, avg=272.19, stdev=257.68 00:08:50.045 clat percentiles (usec): 00:08:50.045 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:08:50.045 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 210], 00:08:50.045 | 70.00th=[ 223], 80.00th=[ 363], 90.00th=[ 408], 95.00th=[ 433], 00:08:50.045 | 99.00th=[ 545], 99.50th=[ 644], 99.90th=[ 3359], 99.95th=[ 5800], 00:08:50.045 | 99.99th=[ 8029] 00:08:50.045 write: IOPS=2224, BW=8899KiB/s (9113kB/s)(8908KiB/1001msec); 0 zone resets 00:08:50.045 slat (usec): min=11, max=100, avg=21.87, stdev= 7.55 00:08:50.045 clat (usec): min=94, max=2333, avg=173.67, stdev=68.60 00:08:50.045 lat (usec): min=120, max=2355, avg=195.54, stdev=69.46 00:08:50.045 clat percentiles (usec): 00:08:50.046 | 1.00th=[ 114], 5.00th=[ 121], 10.00th=[ 126], 20.00th=[ 133], 00:08:50.046 | 30.00th=[ 139], 40.00th=[ 149], 50.00th=[ 157], 60.00th=[ 165], 00:08:50.046 | 70.00th=[ 182], 80.00th=[ 227], 90.00th=[ 249], 95.00th=[ 265], 00:08:50.046 | 99.00th=[ 314], 99.50th=[ 338], 99.90th=[ 562], 99.95th=[ 750], 00:08:50.046 | 99.99th=[ 2343] 00:08:50.046 bw ( KiB/s): min=12288, max=12288, per=32.61%, avg=12288.00, stdev= 0.00, samples=1 00:08:50.046 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:50.046 lat (usec) : 100=0.02%, 250=82.55%, 500=16.49%, 750=0.73% 00:08:50.046 lat (msec) : 2=0.09%, 4=0.07%, 10=0.05% 00:08:50.046 cpu : usr=2.40%, sys=5.70%, ctx=4275, majf=0, minf=9 00:08:50.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.046 issued rwts: total=2048,2227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.046 job3: (groupid=0, jobs=1): err= 0: pid=71594: Wed Jul 24 17:55:56 2024 00:08:50.046 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:50.046 slat (nsec): min=8751, max=58796, avg=14769.87, stdev=3966.95 00:08:50.046 clat (usec): min=146, max=2526, avg=240.99, stdev=114.45 00:08:50.046 lat (usec): min=161, max=2549, avg=255.76, stdev=114.39 00:08:50.046 clat percentiles (usec): 00:08:50.046 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:08:50.046 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 202], 00:08:50.046 | 70.00th=[ 223], 80.00th=[ 359], 90.00th=[ 404], 95.00th=[ 416], 00:08:50.046 | 99.00th=[ 510], 99.50th=[ 586], 99.90th=[ 1156], 99.95th=[ 1221], 00:08:50.046 | 99.99th=[ 2540] 00:08:50.046 write: IOPS=2319, BW=9279KiB/s (9501kB/s)(9288KiB/1001msec); 0 zone resets 00:08:50.046 slat (usec): min=12, max=6398, avg=25.58, stdev=132.47 00:08:50.046 clat (usec): min=103, max=12888, 
avg=176.68, stdev=272.03 00:08:50.046 lat (usec): min=124, max=12924, avg=202.25, stdev=302.56 00:08:50.046 clat percentiles (usec): 00:08:50.046 | 1.00th=[ 115], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 133], 00:08:50.046 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 155], 00:08:50.046 | 70.00th=[ 178], 80.00th=[ 223], 90.00th=[ 255], 95.00th=[ 273], 00:08:50.046 | 99.00th=[ 314], 99.50th=[ 359], 99.90th=[ 1205], 99.95th=[ 1778], 00:08:50.046 | 99.99th=[12911] 00:08:50.046 bw ( KiB/s): min=12288, max=12288, per=32.61%, avg=12288.00, stdev= 0.00, samples=1 00:08:50.046 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:50.046 lat (usec) : 250=81.56%, 500=17.85%, 750=0.39%, 1000=0.05% 00:08:50.046 lat (msec) : 2=0.11%, 4=0.02%, 20=0.02% 00:08:50.046 cpu : usr=1.40%, sys=6.90%, ctx=4375, majf=0, minf=9 00:08:50.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.046 issued rwts: total=2048,2322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.046 00:08:50.046 Run status group 0 (all jobs): 00:08:50.046 READ: bw=33.1MiB/s (34.7MB/s), 8184KiB/s-9339KiB/s (8380kB/s-9563kB/s), io=33.1MiB (34.7MB), run=1001-1001msec 00:08:50.046 WRITE: bw=36.8MiB/s (38.6MB/s), 8899KiB/s-9.99MiB/s (9113kB/s-10.5MB/s), io=36.8MiB (38.6MB), run=1001-1001msec 00:08:50.046 00:08:50.046 Disk stats (read/write): 00:08:50.046 nvme0n1: ios=1910/2048, merge=0/0, ticks=431/357, in_queue=788, util=86.97% 00:08:50.046 nvme0n2: ios=2084/2503, merge=0/0, ticks=428/351, in_queue=779, util=87.09% 00:08:50.046 nvme0n3: ios=1820/2048, merge=0/0, ticks=412/346, in_queue=758, util=88.12% 00:08:50.046 nvme0n4: ios=1859/2048, merge=0/0, ticks=420/355, in_queue=775, util=89.18% 00:08:50.046 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:50.046 [global] 00:08:50.046 thread=1 00:08:50.046 invalidate=1 00:08:50.046 rw=write 00:08:50.046 time_based=1 00:08:50.046 runtime=1 00:08:50.046 ioengine=libaio 00:08:50.046 direct=1 00:08:50.046 bs=4096 00:08:50.046 iodepth=128 00:08:50.046 norandommap=0 00:08:50.046 numjobs=1 00:08:50.046 00:08:50.046 verify_dump=1 00:08:50.046 verify_backlog=512 00:08:50.046 verify_state_save=0 00:08:50.046 do_verify=1 00:08:50.046 verify=crc32c-intel 00:08:50.046 [job0] 00:08:50.046 filename=/dev/nvme0n1 00:08:50.046 [job1] 00:08:50.046 filename=/dev/nvme0n2 00:08:50.046 [job2] 00:08:50.046 filename=/dev/nvme0n3 00:08:50.046 [job3] 00:08:50.046 filename=/dev/nvme0n4 00:08:50.046 Could not set queue depth (nvme0n1) 00:08:50.046 Could not set queue depth (nvme0n2) 00:08:50.046 Could not set queue depth (nvme0n3) 00:08:50.046 Could not set queue depth (nvme0n4) 00:08:50.046 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:50.046 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:50.046 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:50.046 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:50.046 fio-3.35 00:08:50.046 Starting 4 threads 00:08:51.420 
00:08:51.421 job0: (groupid=0, jobs=1): err= 0: pid=71648: Wed Jul 24 17:55:58 2024 00:08:51.421 read: IOPS=5429, BW=21.2MiB/s (22.2MB/s)(21.2MiB/1002msec) 00:08:51.421 slat (usec): min=6, max=2791, avg=88.23, stdev=369.37 00:08:51.421 clat (usec): min=1255, max=14124, avg=11507.74, stdev=1188.29 00:08:51.421 lat (usec): min=1272, max=14682, avg=11595.97, stdev=1158.74 00:08:51.421 clat percentiles (usec): 00:08:51.421 | 1.00th=[ 6783], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11076], 00:08:51.421 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:08:51.421 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[12780], 00:08:51.421 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14091], 99.95th=[14091], 00:08:51.421 | 99.99th=[14091] 00:08:51.421 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:08:51.421 slat (usec): min=8, max=3070, avg=84.28, stdev=322.73 00:08:51.421 clat (usec): min=8410, max=14271, avg=11348.79, stdev=1085.03 00:08:51.421 lat (usec): min=8731, max=14291, avg=11433.07, stdev=1071.55 00:08:51.421 clat percentiles (usec): 00:08:51.421 | 1.00th=[ 9241], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:08:51.421 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:08:51.421 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:08:51.421 | 99.00th=[13829], 99.50th=[14091], 99.90th=[14222], 99.95th=[14222], 00:08:51.421 | 99.99th=[14222] 00:08:51.421 bw ( KiB/s): min=21800, max=23302, per=34.03%, avg=22551.00, stdev=1062.07, samples=2 00:08:51.421 iops : min= 5450, max= 5825, avg=5637.50, stdev=265.17, samples=2 00:08:51.421 lat (msec) : 2=0.11%, 4=0.17%, 10=11.90%, 20=87.82% 00:08:51.421 cpu : usr=6.19%, sys=13.19%, ctx=691, majf=0, minf=9 00:08:51.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:51.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:51.421 issued rwts: total=5440,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:51.421 job1: (groupid=0, jobs=1): err= 0: pid=71649: Wed Jul 24 17:55:58 2024 00:08:51.421 read: IOPS=5513, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1002msec) 00:08:51.421 slat (usec): min=4, max=8751, avg=89.93, stdev=433.66 00:08:51.421 clat (usec): min=1158, max=18713, avg=11463.95, stdev=1711.83 00:08:51.421 lat (usec): min=3440, max=18737, avg=11553.88, stdev=1741.19 00:08:51.421 clat percentiles (usec): 00:08:51.421 | 1.00th=[ 6390], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10552], 00:08:51.421 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:08:51.421 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13698], 95.00th=[14615], 00:08:51.421 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16909], 99.95th=[17171], 00:08:51.421 | 99.99th=[18744] 00:08:51.421 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:08:51.421 slat (usec): min=8, max=6705, avg=81.82, stdev=356.24 00:08:51.421 clat (usec): min=6580, max=21782, avg=11251.37, stdev=1403.44 00:08:51.421 lat (usec): min=6609, max=21802, avg=11333.19, stdev=1438.41 00:08:51.421 clat percentiles (usec): 00:08:51.421 | 1.00th=[ 7504], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10421], 00:08:51.421 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:08:51.421 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12911], 95.00th=[13829], 
00:08:51.421 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16909], 99.95th=[17695], 00:08:51.421 | 99.99th=[21890] 00:08:51.421 bw ( KiB/s): min=22216, max=22885, per=34.03%, avg=22550.50, stdev=473.05, samples=2 00:08:51.421 iops : min= 5554, max= 5721, avg=5637.50, stdev=118.09, samples=2 00:08:51.421 lat (msec) : 2=0.01%, 4=0.29%, 10=11.97%, 20=87.72%, 50=0.01% 00:08:51.421 cpu : usr=4.00%, sys=14.39%, ctx=753, majf=0, minf=17 00:08:51.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:51.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:51.421 issued rwts: total=5525,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:51.421 job2: (groupid=0, jobs=1): err= 0: pid=71650: Wed Jul 24 17:55:58 2024 00:08:51.421 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:08:51.421 slat (usec): min=6, max=8916, avg=169.84, stdev=787.61 00:08:51.421 clat (usec): min=15632, max=35551, avg=21642.26, stdev=3278.58 00:08:51.421 lat (usec): min=15654, max=35661, avg=21812.10, stdev=3339.55 00:08:51.421 clat percentiles (usec): 00:08:51.421 | 1.00th=[15795], 5.00th=[17433], 10.00th=[18744], 20.00th=[19268], 00:08:51.421 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20579], 60.00th=[21365], 00:08:51.421 | 70.00th=[22676], 80.00th=[24249], 90.00th=[26608], 95.00th=[28443], 00:08:51.421 | 99.00th=[30278], 99.50th=[30802], 99.90th=[31589], 99.95th=[33817], 00:08:51.421 | 99.99th=[35390] 00:08:51.421 write: IOPS=2811, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1005msec); 0 zone resets 00:08:51.421 slat (usec): min=11, max=9807, avg=190.72, stdev=716.25 00:08:51.421 clat (usec): min=2204, max=38262, avg=25177.92, stdev=5355.12 00:08:51.421 lat (usec): min=4568, max=38292, avg=25368.65, stdev=5387.70 00:08:51.421 clat percentiles (usec): 00:08:51.421 | 1.00th=[ 7308], 5.00th=[17171], 10.00th=[19268], 20.00th=[21103], 00:08:51.421 | 30.00th=[22938], 40.00th=[24249], 50.00th=[25297], 60.00th=[26084], 00:08:51.421 | 70.00th=[26870], 80.00th=[29230], 90.00th=[32637], 95.00th=[34866], 00:08:51.421 | 99.00th=[35914], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011], 00:08:51.421 | 99.99th=[38011] 00:08:51.421 bw ( KiB/s): min= 9296, max=12312, per=16.30%, avg=10804.00, stdev=2132.63, samples=2 00:08:51.421 iops : min= 2324, max= 3078, avg=2701.00, stdev=533.16, samples=2 00:08:51.421 lat (msec) : 4=0.02%, 10=0.59%, 20=24.58%, 50=74.81% 00:08:51.421 cpu : usr=2.69%, sys=9.86%, ctx=357, majf=0, minf=15 00:08:51.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:08:51.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:51.421 issued rwts: total=2560,2826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:51.421 job3: (groupid=0, jobs=1): err= 0: pid=71651: Wed Jul 24 17:55:58 2024 00:08:51.421 read: IOPS=2271, BW=9087KiB/s (9305kB/s)(9132KiB/1005msec) 00:08:51.421 slat (usec): min=6, max=10131, avg=186.91, stdev=890.40 00:08:51.421 clat (usec): min=4294, max=36923, avg=23240.50, stdev=4583.46 00:08:51.421 lat (usec): min=4313, max=36958, avg=23427.41, stdev=4561.06 00:08:51.421 clat percentiles (usec): 00:08:51.421 | 1.00th=[10814], 5.00th=[17957], 10.00th=[19530], 20.00th=[20055], 00:08:51.421 | 
30.00th=[20841], 40.00th=[21365], 50.00th=[22676], 60.00th=[23462], 00:08:51.421 | 70.00th=[25035], 80.00th=[26346], 90.00th=[29230], 95.00th=[32900], 00:08:51.421 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:08:51.421 | 99.99th=[36963] 00:08:51.421 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:08:51.421 slat (usec): min=13, max=9986, avg=214.43, stdev=783.11 00:08:51.421 clat (usec): min=14411, max=41074, avg=28730.68, stdev=5003.42 00:08:51.421 lat (usec): min=15550, max=41107, avg=28945.11, stdev=4992.51 00:08:51.421 clat percentiles (usec): 00:08:51.421 | 1.00th=[18744], 5.00th=[21103], 10.00th=[22938], 20.00th=[24511], 00:08:51.421 | 30.00th=[25560], 40.00th=[26870], 50.00th=[27919], 60.00th=[29230], 00:08:51.421 | 70.00th=[31851], 80.00th=[33817], 90.00th=[35914], 95.00th=[37487], 00:08:51.421 | 99.00th=[39060], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:08:51.421 | 99.99th=[41157] 00:08:51.421 bw ( KiB/s): min=10112, max=10368, per=15.45%, avg=10240.00, stdev=181.02, samples=2 00:08:51.421 iops : min= 2528, max= 2592, avg=2560.00, stdev=45.25, samples=2 00:08:51.421 lat (msec) : 10=0.37%, 20=9.50%, 50=90.13% 00:08:51.421 cpu : usr=2.39%, sys=8.57%, ctx=322, majf=0, minf=11 00:08:51.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:08:51.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:51.421 issued rwts: total=2283,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:51.421 00:08:51.421 Run status group 0 (all jobs): 00:08:51.421 READ: bw=61.4MiB/s (64.4MB/s), 9087KiB/s-21.5MiB/s (9305kB/s-22.6MB/s), io=61.8MiB (64.7MB), run=1002-1005msec 00:08:51.421 WRITE: bw=64.7MiB/s (67.9MB/s), 9.95MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=65.0MiB (68.2MB), run=1002-1005msec 00:08:51.421 00:08:51.421 Disk stats (read/write): 00:08:51.421 nvme0n1: ios=4658/4878, merge=0/0, ticks=12622/12233, in_queue=24855, util=87.68% 00:08:51.421 nvme0n2: ios=4657/4927, merge=0/0, ticks=25140/24578, in_queue=49718, util=87.89% 00:08:51.421 nvme0n3: ios=2048/2479, merge=0/0, ticks=14183/19587, in_queue=33770, util=89.02% 00:08:51.421 nvme0n4: ios=2048/2191, merge=0/0, ticks=11769/14069, in_queue=25838, util=89.69% 00:08:51.421 17:55:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:51.421 [global] 00:08:51.421 thread=1 00:08:51.421 invalidate=1 00:08:51.421 rw=randwrite 00:08:51.421 time_based=1 00:08:51.421 runtime=1 00:08:51.421 ioengine=libaio 00:08:51.421 direct=1 00:08:51.421 bs=4096 00:08:51.421 iodepth=128 00:08:51.421 norandommap=0 00:08:51.421 numjobs=1 00:08:51.421 00:08:51.421 verify_dump=1 00:08:51.421 verify_backlog=512 00:08:51.421 verify_state_save=0 00:08:51.421 do_verify=1 00:08:51.421 verify=crc32c-intel 00:08:51.421 [job0] 00:08:51.421 filename=/dev/nvme0n1 00:08:51.421 [job1] 00:08:51.421 filename=/dev/nvme0n2 00:08:51.421 [job2] 00:08:51.421 filename=/dev/nvme0n3 00:08:51.421 [job3] 00:08:51.421 filename=/dev/nvme0n4 00:08:51.421 Could not set queue depth (nvme0n1) 00:08:51.421 Could not set queue depth (nvme0n2) 00:08:51.421 Could not set queue depth (nvme0n3) 00:08:51.421 Could not set queue depth (nvme0n4) 00:08:51.421 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:08:51.421 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:51.421 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:51.422 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:51.422 fio-3.35 00:08:51.422 Starting 4 threads 00:08:52.795 00:08:52.795 job0: (groupid=0, jobs=1): err= 0: pid=71710: Wed Jul 24 17:55:59 2024 00:08:52.795 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:08:52.795 slat (usec): min=6, max=5318, avg=84.42, stdev=391.83 00:08:52.795 clat (usec): min=6629, max=16969, avg=11035.22, stdev=1386.78 00:08:52.795 lat (usec): min=6647, max=16983, avg=11119.65, stdev=1416.65 00:08:52.795 clat percentiles (usec): 00:08:52.795 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10159], 00:08:52.795 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:08:52.795 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12780], 95.00th=[13698], 00:08:52.795 | 99.00th=[15008], 99.50th=[15533], 99.90th=[15926], 99.95th=[15926], 00:08:52.795 | 99.99th=[16909] 00:08:52.795 write: IOPS=6072, BW=23.7MiB/s (24.9MB/s)(23.8MiB/1005msec); 0 zone resets 00:08:52.795 slat (usec): min=8, max=4561, avg=78.16, stdev=342.49 00:08:52.795 clat (usec): min=4240, max=16005, avg=10656.23, stdev=1241.21 00:08:52.795 lat (usec): min=4795, max=16022, avg=10734.39, stdev=1269.47 00:08:52.795 clat percentiles (usec): 00:08:52.795 | 1.00th=[ 7111], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10028], 00:08:52.795 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:08:52.795 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11600], 95.00th=[13042], 00:08:52.795 | 99.00th=[14877], 99.50th=[15270], 99.90th=[15664], 99.95th=[15664], 00:08:52.795 | 99.99th=[16057] 00:08:52.795 bw ( KiB/s): min=23185, max=24576, per=35.36%, avg=23880.50, stdev=983.59, samples=2 00:08:52.795 iops : min= 5796, max= 6144, avg=5970.00, stdev=246.07, samples=2 00:08:52.795 lat (msec) : 10=15.79%, 20=84.21% 00:08:52.795 cpu : usr=4.68%, sys=15.64%, ctx=743, majf=0, minf=8 00:08:52.795 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:08:52.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.795 issued rwts: total=5632,6103,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.795 job1: (groupid=0, jobs=1): err= 0: pid=71711: Wed Jul 24 17:55:59 2024 00:08:52.795 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:08:52.795 slat (usec): min=8, max=10605, avg=152.94, stdev=792.58 00:08:52.795 clat (usec): min=12556, max=32933, avg=19423.02, stdev=4260.83 00:08:52.795 lat (usec): min=13004, max=35249, avg=19575.96, stdev=4319.55 00:08:52.795 clat percentiles (usec): 00:08:52.795 | 1.00th=[13173], 5.00th=[13435], 10.00th=[13698], 20.00th=[15139], 00:08:52.795 | 30.00th=[16909], 40.00th=[17695], 50.00th=[19268], 60.00th=[20579], 00:08:52.795 | 70.00th=[21627], 80.00th=[23462], 90.00th=[25560], 95.00th=[26084], 00:08:52.795 | 99.00th=[31327], 99.50th=[32375], 99.90th=[32637], 99.95th=[32637], 00:08:52.795 | 99.99th=[32900] 00:08:52.795 write: IOPS=2964, BW=11.6MiB/s (12.1MB/s)(11.7MiB/1009msec); 0 zone resets 00:08:52.796 slat (usec): min=11, 
max=12942, avg=194.33, stdev=696.89 00:08:52.796 clat (usec): min=8074, max=56334, avg=25985.75, stdev=8144.33 00:08:52.796 lat (usec): min=8976, max=56367, avg=26180.08, stdev=8176.14 00:08:52.796 clat percentiles (usec): 00:08:52.796 | 1.00th=[13566], 5.00th=[17957], 10.00th=[18744], 20.00th=[21890], 00:08:52.796 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23987], 00:08:52.796 | 70.00th=[24511], 80.00th=[28443], 90.00th=[39060], 95.00th=[45876], 00:08:52.796 | 99.00th=[53216], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:08:52.796 | 99.99th=[56361] 00:08:52.796 bw ( KiB/s): min=10573, max=12288, per=16.93%, avg=11430.50, stdev=1212.69, samples=2 00:08:52.796 iops : min= 2643, max= 3072, avg=2857.50, stdev=303.35, samples=2 00:08:52.796 lat (msec) : 10=0.16%, 20=33.22%, 50=65.18%, 100=1.44% 00:08:52.796 cpu : usr=2.78%, sys=9.42%, ctx=453, majf=0, minf=11 00:08:52.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:08:52.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.796 issued rwts: total=2560,2991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.796 job2: (groupid=0, jobs=1): err= 0: pid=71712: Wed Jul 24 17:55:59 2024 00:08:52.796 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:08:52.796 slat (usec): min=8, max=13194, avg=184.12, stdev=983.89 00:08:52.796 clat (usec): min=10420, max=49506, avg=20360.12, stdev=7097.66 00:08:52.796 lat (usec): min=10439, max=49557, avg=20544.24, stdev=7195.65 00:08:52.796 clat percentiles (usec): 00:08:52.796 | 1.00th=[11207], 5.00th=[12911], 10.00th=[14222], 20.00th=[15139], 00:08:52.796 | 30.00th=[16057], 40.00th=[16581], 50.00th=[17695], 60.00th=[19530], 00:08:52.796 | 70.00th=[21365], 80.00th=[25560], 90.00th=[28967], 95.00th=[36439], 00:08:52.796 | 99.00th=[45351], 99.50th=[46400], 99.90th=[49546], 99.95th=[49546], 00:08:52.796 | 99.99th=[49546] 00:08:52.796 write: IOPS=2823, BW=11.0MiB/s (11.6MB/s)(11.2MiB/1011msec); 0 zone resets 00:08:52.796 slat (usec): min=12, max=14377, avg=175.50, stdev=688.03 00:08:52.796 clat (usec): min=8764, max=63751, avg=26502.13, stdev=9286.44 00:08:52.796 lat (usec): min=10040, max=63779, avg=26677.63, stdev=9342.50 00:08:52.796 clat percentiles (usec): 00:08:52.796 | 1.00th=[12518], 5.00th=[16319], 10.00th=[17957], 20.00th=[21627], 00:08:52.796 | 30.00th=[22414], 40.00th=[23200], 50.00th=[23462], 60.00th=[23987], 00:08:52.796 | 70.00th=[26608], 80.00th=[29754], 90.00th=[41681], 95.00th=[49546], 00:08:52.796 | 99.00th=[57934], 99.50th=[58459], 99.90th=[63701], 99.95th=[63701], 00:08:52.796 | 99.99th=[63701] 00:08:52.796 bw ( KiB/s): min=10027, max=11791, per=16.15%, avg=10909.00, stdev=1247.34, samples=2 00:08:52.796 iops : min= 2506, max= 2947, avg=2726.50, stdev=311.83, samples=2 00:08:52.796 lat (msec) : 10=0.02%, 20=36.81%, 50=61.00%, 100=2.18% 00:08:52.796 cpu : usr=2.97%, sys=9.70%, ctx=391, majf=0, minf=11 00:08:52.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:08:52.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.796 issued rwts: total=2560,2855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.796 job3: (groupid=0, jobs=1): err= 0: 
pid=71713: Wed Jul 24 17:55:59 2024 00:08:52.796 read: IOPS=5025, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1002msec) 00:08:52.796 slat (usec): min=6, max=3004, avg=96.14, stdev=438.85 00:08:52.796 clat (usec): min=510, max=15159, avg=12599.40, stdev=1259.20 00:08:52.796 lat (usec): min=2923, max=16998, avg=12695.54, stdev=1203.71 00:08:52.796 clat percentiles (usec): 00:08:52.796 | 1.00th=[ 6325], 5.00th=[10683], 10.00th=[11338], 20.00th=[12256], 00:08:52.796 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:08:52.796 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:08:52.796 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14746], 99.95th=[15139], 00:08:52.796 | 99.99th=[15139] 00:08:52.796 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:08:52.796 slat (usec): min=8, max=3095, avg=93.10, stdev=409.10 00:08:52.796 clat (usec): min=9212, max=15389, avg=12318.40, stdev=1233.03 00:08:52.796 lat (usec): min=9450, max=15411, avg=12411.49, stdev=1218.52 00:08:52.796 clat percentiles (usec): 00:08:52.796 | 1.00th=[ 9896], 5.00th=[10421], 10.00th=[10683], 20.00th=[11076], 00:08:52.796 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:08:52.796 | 70.00th=[13042], 80.00th=[13566], 90.00th=[13960], 95.00th=[14353], 00:08:52.796 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15401], 99.95th=[15401], 00:08:52.796 | 99.99th=[15401] 00:08:52.796 bw ( KiB/s): min=20439, max=20521, per=30.33%, avg=20480.00, stdev=57.98, samples=2 00:08:52.796 iops : min= 5109, max= 5130, avg=5119.50, stdev=14.85, samples=2 00:08:52.796 lat (usec) : 750=0.01% 00:08:52.796 lat (msec) : 4=0.32%, 10=1.79%, 20=97.88% 00:08:52.796 cpu : usr=3.80%, sys=13.79%, ctx=484, majf=0, minf=15 00:08:52.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:52.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.796 issued rwts: total=5036,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.796 00:08:52.796 Run status group 0 (all jobs): 00:08:52.796 READ: bw=61.0MiB/s (64.0MB/s), 9.89MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=61.7MiB (64.7MB), run=1002-1011msec 00:08:52.796 WRITE: bw=66.0MiB/s (69.2MB/s), 11.0MiB/s-23.7MiB/s (11.6MB/s-24.9MB/s), io=66.7MiB (69.9MB), run=1002-1011msec 00:08:52.796 00:08:52.796 Disk stats (read/write): 00:08:52.796 nvme0n1: ios=4876/5120, merge=0/0, ticks=25087/23445, in_queue=48532, util=87.56% 00:08:52.796 nvme0n2: ios=2221/2560, merge=0/0, ticks=20511/30288, in_queue=50799, util=86.71% 00:08:52.796 nvme0n3: ios=2064/2479, merge=0/0, ticks=21692/28749, in_queue=50441, util=89.06% 00:08:52.796 nvme0n4: ios=4096/4608, merge=0/0, ticks=12188/12401, in_queue=24589, util=89.60% 00:08:52.796 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:52.796 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=71726 00:08:52.796 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:52.796 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:52.796 [global] 00:08:52.796 thread=1 00:08:52.796 invalidate=1 00:08:52.796 rw=read 00:08:52.796 time_based=1 00:08:52.796 runtime=10 00:08:52.796 ioengine=libaio 00:08:52.796 
direct=1 00:08:52.796 bs=4096 00:08:52.796 iodepth=1 00:08:52.796 norandommap=1 00:08:52.796 numjobs=1 00:08:52.796 00:08:52.796 [job0] 00:08:52.796 filename=/dev/nvme0n1 00:08:52.796 [job1] 00:08:52.796 filename=/dev/nvme0n2 00:08:52.796 [job2] 00:08:52.796 filename=/dev/nvme0n3 00:08:52.796 [job3] 00:08:52.796 filename=/dev/nvme0n4 00:08:52.796 Could not set queue depth (nvme0n1) 00:08:52.796 Could not set queue depth (nvme0n2) 00:08:52.796 Could not set queue depth (nvme0n3) 00:08:52.796 Could not set queue depth (nvme0n4) 00:08:52.796 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.796 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.796 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.796 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.796 fio-3.35 00:08:52.796 Starting 4 threads 00:08:56.080 17:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:56.080 fio: pid=71774, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:56.080 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=39313408, buflen=4096 00:08:56.080 17:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:56.080 fio: pid=71773, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:56.080 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=73265152, buflen=4096 00:08:56.080 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.080 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:56.338 fio: pid=71771, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:56.338 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=47939584, buflen=4096 00:08:56.338 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.338 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:56.596 fio: pid=71772, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:56.596 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=13033472, buflen=4096 00:08:56.596 00:08:56.596 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71771: Wed Jul 24 17:56:03 2024 00:08:56.596 read: IOPS=3467, BW=13.5MiB/s (14.2MB/s)(45.7MiB/3376msec) 00:08:56.596 slat (usec): min=7, max=17341, avg=17.86, stdev=234.69 00:08:56.596 clat (usec): min=37, max=4162, avg=269.22, stdev=108.03 00:08:56.596 lat (usec): min=114, max=17675, avg=287.08, stdev=261.43 00:08:56.596 clat percentiles (usec): 00:08:56.596 | 1.00th=[ 129], 5.00th=[ 147], 10.00th=[ 163], 20.00th=[ 245], 00:08:56.596 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:08:56.596 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 318], 95.00th=[ 338], 00:08:56.596 | 99.00th=[ 453], 99.50th=[ 
498], 99.90th=[ 1074], 99.95th=[ 3556], 00:08:56.596 | 99.99th=[ 3884] 00:08:56.596 bw ( KiB/s): min=12032, max=14288, per=20.32%, avg=13250.67, stdev=851.47, samples=6 00:08:56.596 iops : min= 3008, max= 3572, avg=3312.67, stdev=212.87, samples=6 00:08:56.596 lat (usec) : 50=0.02%, 250=22.60%, 500=76.90%, 750=0.33%, 1000=0.04% 00:08:56.596 lat (msec) : 2=0.03%, 4=0.07%, 10=0.01% 00:08:56.596 cpu : usr=0.86%, sys=4.27%, ctx=11727, majf=0, minf=1 00:08:56.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.596 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.596 issued rwts: total=11705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.596 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71772: Wed Jul 24 17:56:03 2024 00:08:56.596 read: IOPS=5427, BW=21.2MiB/s (22.2MB/s)(76.4MiB/3605msec) 00:08:56.596 slat (usec): min=6, max=13713, avg=15.20, stdev=160.85 00:08:56.596 clat (nsec): min=1438, max=23247k, avg=167950.08, stdev=180082.66 00:08:56.596 lat (usec): min=103, max=23259, avg=183.15, stdev=242.06 00:08:56.596 clat percentiles (usec): 00:08:56.596 | 1.00th=[ 113], 5.00th=[ 126], 10.00th=[ 139], 20.00th=[ 145], 00:08:56.596 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:08:56.596 | 70.00th=[ 172], 80.00th=[ 188], 90.00th=[ 204], 95.00th=[ 217], 00:08:56.596 | 99.00th=[ 273], 99.50th=[ 355], 99.90th=[ 627], 99.95th=[ 1418], 00:08:56.596 | 99.99th=[ 6521] 00:08:56.596 bw ( KiB/s): min=17664, max=23992, per=33.49%, avg=21831.67, stdev=2671.71, samples=6 00:08:56.596 iops : min= 4416, max= 5998, avg=5457.83, stdev=667.88, samples=6 00:08:56.596 lat (usec) : 2=0.01%, 4=0.01%, 50=0.01%, 100=0.09%, 250=98.28% 00:08:56.596 lat (usec) : 500=1.46%, 750=0.07%, 1000=0.02% 00:08:56.596 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01%, 50=0.01% 00:08:56.596 cpu : usr=1.17%, sys=5.94%, ctx=19605, majf=0, minf=1 00:08:56.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.596 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.596 issued rwts: total=19567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.596 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71773: Wed Jul 24 17:56:03 2024 00:08:56.596 read: IOPS=5664, BW=22.1MiB/s (23.2MB/s)(69.9MiB/3158msec) 00:08:56.596 slat (usec): min=8, max=12917, avg=12.91, stdev=111.63 00:08:56.596 clat (usec): min=114, max=3706, avg=162.67, stdev=56.46 00:08:56.596 lat (usec): min=124, max=13101, avg=175.58, stdev=125.38 00:08:56.596 clat percentiles (usec): 00:08:56.596 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:08:56.596 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:08:56.596 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 190], 00:08:56.596 | 99.00th=[ 208], 99.50th=[ 219], 99.90th=[ 400], 99.95th=[ 1254], 00:08:56.596 | 99.99th=[ 3490] 00:08:56.596 bw ( KiB/s): min=21120, max=24184, per=34.80%, avg=22686.67, stdev=1315.88, samples=6 00:08:56.596 iops : min= 5280, max= 6046, avg=5671.67, stdev=328.97, samples=6 00:08:56.596 lat (usec) : 250=99.83%, 500=0.07%, 
750=0.02%, 1000=0.01% 00:08:56.596 lat (msec) : 2=0.02%, 4=0.03% 00:08:56.596 cpu : usr=0.95%, sys=5.89%, ctx=17892, majf=0, minf=1 00:08:56.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.596 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.596 issued rwts: total=17888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.596 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71774: Wed Jul 24 17:56:03 2024 00:08:56.596 read: IOPS=3291, BW=12.9MiB/s (13.5MB/s)(37.5MiB/2916msec) 00:08:56.596 slat (nsec): min=7683, max=72556, avg=13175.43, stdev=5798.23 00:08:56.596 clat (usec): min=140, max=7316, avg=289.53, stdev=125.24 00:08:56.596 lat (usec): min=150, max=7329, avg=302.71, stdev=125.65 00:08:56.596 clat percentiles (usec): 00:08:56.596 | 1.00th=[ 200], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:08:56.596 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:08:56.596 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 338], 00:08:56.596 | 99.00th=[ 437], 99.50th=[ 490], 99.90th=[ 824], 99.95th=[ 3556], 00:08:56.596 | 99.99th=[ 7308] 00:08:56.596 bw ( KiB/s): min=12280, max=13840, per=20.19%, avg=13160.00, stdev=739.53, samples=5 00:08:56.596 iops : min= 3070, max= 3460, avg=3290.00, stdev=184.88, samples=5 00:08:56.596 lat (usec) : 250=5.22%, 500=94.37%, 750=0.27%, 1000=0.04% 00:08:56.596 lat (msec) : 2=0.01%, 4=0.05%, 10=0.02% 00:08:56.596 cpu : usr=0.79%, sys=3.67%, ctx=9599, majf=0, minf=2 00:08:56.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.596 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.596 issued rwts: total=9599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.596 00:08:56.596 Run status group 0 (all jobs): 00:08:56.596 READ: bw=63.7MiB/s (66.8MB/s), 12.9MiB/s-22.1MiB/s (13.5MB/s-23.2MB/s), io=230MiB (241MB), run=2916-3605msec 00:08:56.596 00:08:56.596 Disk stats (read/write): 00:08:56.596 nvme0n1: ios=11686/0, merge=0/0, ticks=3179/0, in_queue=3179, util=94.96% 00:08:56.596 nvme0n2: ios=18174/0, merge=0/0, ticks=3105/0, in_queue=3105, util=95.70% 00:08:56.596 nvme0n3: ios=17679/0, merge=0/0, ticks=2918/0, in_queue=2918, util=96.21% 00:08:56.596 nvme0n4: ios=9445/0, merge=0/0, ticks=2730/0, in_queue=2730, util=96.46% 00:08:56.596 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.596 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:56.854 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.854 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:57.112 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:57.112 17:56:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:57.371 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:57.371 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:57.628 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:57.628 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:57.885 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:57.885 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 71726 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:57.886 nvmf hotplug test: fio failed as expected 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:57.886 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
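The preceding hotplug pass boils down to a simple pattern: launch a long-running read job against the exported namespaces, delete the backing RAID and malloc bdevs over RPC while it runs, and accept fio's err=121 (Remote I/O error) exits as the expected result. A condensed sketch of that flow, using only commands and names that appear in the trace above (ordering and error handling simplified):
# condensed hotplug-test sketch; paths, flags and bdev names taken from the trace
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# while fio is still reading, pull the bdevs backing the subsystem's namespaces
for bdev in concat0 raid0 Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
  case $bdev in
    concat0|raid0) "$SPDK/scripts/rpc.py" bdev_raid_delete "$bdev" ;;
    *)             "$SPDK/scripts/rpc.py" bdev_malloc_delete "$bdev" ;;
  esac
done
# fio is expected to fail with err=121 (Remote I/O error) on every file
if wait "$fio_pid"; then
  echo "unexpected: fio survived bdev removal"
else
  echo "nvmf hotplug test: fio failed as expected"
fi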
00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.146 rmmod nvme_tcp 00:08:58.146 rmmod nvme_fabrics 00:08:58.146 rmmod nvme_keyring 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 71245 ']' 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 71245 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 71245 ']' 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 71245 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.146 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71245 00:08:58.421 killing process with pid 71245 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71245' 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 71245 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 71245 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:58.421 00:08:58.421 real 0m19.362s 00:08:58.421 user 1m13.972s 00:08:58.421 sys 0m9.180s 00:08:58.421 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.421 17:56:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:58.421 ************************************ 00:08:58.421 END TEST nvmf_fio_target 00:08:58.421 ************************************ 00:08:58.680 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:58.680 17:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:58.680 17:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.680 17:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.680 ************************************ 00:08:58.680 START TEST nvmf_bdevio 00:08:58.680 ************************************ 00:08:58.680 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:58.680 * Looking for test storage... 00:08:58.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.680 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.680 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:58.680 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.681 17:56:05 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:58.681 Cannot find device "nvmf_tgt_br" 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.681 Cannot find device "nvmf_tgt_br2" 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:58.681 Cannot find device "nvmf_tgt_br" 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:58.681 Cannot find device "nvmf_tgt_br2" 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:08:58.681 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:58.940 17:56:05 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:58.940 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:58.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:08:58.940 00:08:58.940 --- 10.0.0.2 ping statistics --- 00:08:58.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.941 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:58.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:58.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:58.941 00:08:58.941 --- 10.0.0.3 ping statistics --- 00:08:58.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.941 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:58.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:58.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:08:58.941 00:08:58.941 --- 10.0.0.1 ping statistics --- 00:08:58.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.941 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=72095 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 72095 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 72095 ']' 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.941 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.199 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.199 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.199 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:59.199 [2024-07-24 17:56:05.977696] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
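Before the target application above (nvmfpid 72095) was started inside the network namespace, nvmf_veth_init wired up the test topology: one initiator-side veth on the host, two target-side veths moved into nvmf_tgt_ns_spdk, and a bridge joining their peer ends, with TCP port 4420 allowed through. Condensed into plain commands taken from the trace, the setup is roughly:
# condensed nvmf_veth_init sketch; interface names and addresses as in the trace above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side (host)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # host -> target, both succeed above
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # target ns -> host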
00:08:59.199 [2024-07-24 17:56:05.977818] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.199 [2024-07-24 17:56:06.124685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.457 [2024-07-24 17:56:06.244608] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.457 [2024-07-24 17:56:06.244669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.457 [2024-07-24 17:56:06.244684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.457 [2024-07-24 17:56:06.244697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.457 [2024-07-24 17:56:06.244708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.457 [2024-07-24 17:56:06.244842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:59.457 [2024-07-24 17:56:06.245730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:59.457 [2024-07-24 17:56:06.245856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:59.457 [2024-07-24 17:56:06.245927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.073 [2024-07-24 17:56:06.962990] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.073 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.073 Malloc0 00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.073 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.073 [2024-07-24 17:56:07.033722] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.340 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.340 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:00.340 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:00.340 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:00.340 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:00.340 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:00.340 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:00.340 { 00:09:00.340 "params": { 00:09:00.340 "name": "Nvme$subsystem", 00:09:00.340 "trtype": "$TEST_TRANSPORT", 00:09:00.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.340 "adrfam": "ipv4", 00:09:00.340 "trsvcid": "$NVMF_PORT", 00:09:00.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.340 "hdgst": ${hdgst:-false}, 00:09:00.340 "ddgst": ${ddgst:-false} 00:09:00.340 }, 00:09:00.340 "method": "bdev_nvme_attach_controller" 00:09:00.340 } 00:09:00.340 EOF 00:09:00.340 )") 00:09:00.340 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:00.340 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:09:00.340 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:00.340 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:00.340 "params": { 00:09:00.340 "name": "Nvme1", 00:09:00.340 "trtype": "tcp", 00:09:00.340 "traddr": "10.0.0.2", 00:09:00.340 "adrfam": "ipv4", 00:09:00.340 "trsvcid": "4420", 00:09:00.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.340 "hdgst": false, 00:09:00.340 "ddgst": false 00:09:00.340 }, 00:09:00.340 "method": "bdev_nvme_attach_controller" 00:09:00.340 }' 00:09:00.340 [2024-07-24 17:56:07.093616] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
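The bdevio binary launched above is fed a JSON config that attaches a bdev_nvme controller to the listener created a few lines earlier. For orientation, the target-side setup reduces to five RPCs, restated here as direct rpc.py calls (the rpc_cmd helper in the trace ultimately invokes the same script); the final nvme connect line is hypothetical, shown only to make the listener parameters concrete, and is not part of this test:
# target setup restated as plain rpc.py calls; values copied from the trace
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# hypothetical kernel-initiator equivalent of the attach (not used by this test)
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1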
00:09:00.340 [2024-07-24 17:56:07.093713] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72149 ] 00:09:00.340 [2024-07-24 17:56:07.238216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:00.598 [2024-07-24 17:56:07.343702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.598 [2024-07-24 17:56:07.343750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.598 [2024-07-24 17:56:07.343755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.598 I/O targets: 00:09:00.598 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:00.598 00:09:00.598 00:09:00.598 CUnit - A unit testing framework for C - Version 2.1-3 00:09:00.598 http://cunit.sourceforge.net/ 00:09:00.598 00:09:00.598 00:09:00.598 Suite: bdevio tests on: Nvme1n1 00:09:00.598 Test: blockdev write read block ...passed 00:09:00.856 Test: blockdev write zeroes read block ...passed 00:09:00.856 Test: blockdev write zeroes read no split ...passed 00:09:00.856 Test: blockdev write zeroes read split ...passed 00:09:00.856 Test: blockdev write zeroes read split partial ...passed 00:09:00.856 Test: blockdev reset ...[2024-07-24 17:56:07.627388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:00.856 [2024-07-24 17:56:07.627495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee8180 (9): Bad file descriptor 00:09:00.856 [2024-07-24 17:56:07.640299] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:00.856 passed 00:09:00.856 Test: blockdev write read 8 blocks ...passed 00:09:00.856 Test: blockdev write read size > 128k ...passed 00:09:00.856 Test: blockdev write read invalid size ...passed 00:09:00.856 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.856 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.856 Test: blockdev write read max offset ...passed 00:09:00.856 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.856 Test: blockdev writev readv 8 blocks ...passed 00:09:00.856 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.856 Test: blockdev writev readv block ...passed 00:09:00.856 Test: blockdev writev readv size > 128k ...passed 00:09:00.857 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.857 Test: blockdev comparev and writev ...[2024-07-24 17:56:07.817906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.857 [2024-07-24 17:56:07.818116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:00.857 [2024-07-24 17:56:07.818548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.857 [2024-07-24 17:56:07.818748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:00.857 [2024-07-24 17:56:07.819293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.857 [2024-07-24 17:56:07.819395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:00.857 [2024-07-24 17:56:07.819487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.857 [2024-07-24 17:56:07.820009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:00.857 [2024-07-24 17:56:07.820655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.857 [2024-07-24 17:56:07.820837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:00.857 [2024-07-24 17:56:07.821445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.857 [2024-07-24 17:56:07.821607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:00.857 [2024-07-24 17:56:07.822376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.857 [2024-07-24 17:56:07.822503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:00.857 [2024-07-24 17:56:07.822905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.857 [2024-07-24 17:56:07.823022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:01.115 passed 00:09:01.115 Test: blockdev nvme passthru rw ...passed 00:09:01.115 Test: blockdev nvme passthru vendor specific ...[2024-07-24 17:56:07.906685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:01.115 [2024-07-24 17:56:07.907351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:01.115 [2024-07-24 17:56:07.907733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:01.115 [2024-07-24 17:56:07.907949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:01.115 [2024-07-24 17:56:07.908314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:01.115 [2024-07-24 17:56:07.908561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:01.115 [2024-07-24 17:56:07.908992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:01.115 [2024-07-24 17:56:07.909091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:01.115 passed 00:09:01.115 Test: blockdev nvme admin passthru ...passed 00:09:01.115 Test: blockdev copy ...passed 00:09:01.115 00:09:01.115 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.115 suites 1 1 n/a 0 0 00:09:01.115 tests 23 23 23 0 0 00:09:01.115 asserts 152 152 152 0 n/a 00:09:01.115 00:09:01.115 Elapsed time = 0.902 seconds 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.374 rmmod nvme_tcp 00:09:01.374 rmmod nvme_fabrics 00:09:01.374 rmmod nvme_keyring 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
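The nvmftestfini teardown traced around this point (and the killprocess step that follows below) reduces to a short manual cleanup; a sketch using the values from this run, where 72095 is the target pid started for this test:

  sync
  modprobe -v -r nvme-tcp           # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill 72095                        # stop the nvmf_tgt reactor process
  ip -4 addr flush nvmf_init_if     # final address flush performed by nvmftestfini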
00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 72095 ']' 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 72095 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 72095 ']' 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 72095 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72095 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:01.374 killing process with pid 72095 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72095' 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 72095 00:09:01.374 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 72095 00:09:01.632 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.632 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.632 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.633 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.633 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.633 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.633 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.633 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.633 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:01.633 ************************************ 00:09:01.633 END TEST nvmf_bdevio 00:09:01.633 ************************************ 00:09:01.633 00:09:01.633 real 0m3.156s 00:09:01.633 user 0m11.129s 00:09:01.633 sys 0m0.820s 00:09:01.633 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.633 17:56:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:01.891 17:56:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:01.891 ************************************ 00:09:01.891 END TEST nvmf_target_core 00:09:01.891 ************************************ 00:09:01.891 00:09:01.891 real 3m29.974s 00:09:01.891 user 10m54.907s 00:09:01.891 sys 1m9.243s 00:09:01.891 17:56:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.891 17:56:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.891 17:56:08 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:01.891 17:56:08 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:01.891 17:56:08 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.891 17:56:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.891 ************************************ 00:09:01.891 START TEST nvmf_target_extra 00:09:01.891 ************************************ 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:01.892 * Looking for test storage... 00:09:01.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:01.892 17:56:08 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:01.892 ************************************ 00:09:01.892 START TEST nvmf_example 00:09:01.892 ************************************ 00:09:01.892 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:01.892 * Looking for test storage... 00:09:02.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.151 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.152 17:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 
00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:02.152 Cannot find device "nvmf_tgt_br" 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # true 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:02.152 Cannot find device "nvmf_tgt_br2" 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # true 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:02.152 Cannot find device "nvmf_tgt_br" 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # true 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:02.152 Cannot find device "nvmf_tgt_br2" 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # true 00:09:02.152 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:02.152 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:02.152 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:02.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.152 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:09:02.152 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:02.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.152 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:09:02.152 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:02.152 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:02.152 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:02.152 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:02.152 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:02.152 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:02.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:09:02.418 00:09:02.418 --- 10.0.0.2 ping statistics --- 00:09:02.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.418 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:02.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:02.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:09:02.418 00:09:02.418 --- 10.0.0.3 ping statistics --- 00:09:02.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.418 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:02.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:02.418 00:09:02.418 --- 10.0.0.1 ping statistics --- 00:09:02.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.418 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:02.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
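The nvmf_veth_init trace above is the entire virtual topology behind the 10.0.0.x addresses used in these tests: the target-side veths (nvmf_tgt_if, nvmf_tgt_if2) live inside the nvmf_tgt_ns_spdk namespace, while their host-side peers are joined to the initiator's peer on a bridge (nvmf_br). Condensed to its essential commands, all of which appear verbatim in the trace (link-up steps and the second target interface's addressing omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side (nvmf_tgt_if2/nvmf_tgt_br2 added the same way for 10.0.0.3)
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                               # sanity check, as in the ping output above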
00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=72376 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 72376 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 72376 ']' 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.418 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.419 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.419 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.419 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:09:03.794 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:13.803 Initializing NVMe Controllers 00:09:13.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:13.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:13.803 Initialization complete. Launching workers. 
00:09:13.803 ======================================================== 00:09:13.803 Latency(us) 00:09:13.803 Device Information : IOPS MiB/s Average min max 00:09:13.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15438.04 60.30 4145.24 652.83 23587.42 00:09:13.803 ======================================================== 00:09:13.803 Total : 15438.04 60.30 4145.24 652.83 23587.42 00:09:13.803 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.803 rmmod nvme_tcp 00:09:13.803 rmmod nvme_fabrics 00:09:13.803 rmmod nvme_keyring 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 72376 ']' 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 72376 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 72376 ']' 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 72376 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:09:13.803 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72376 00:09:14.060 killing process with pid 72376 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72376' 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 72376 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 72376 00:09:14.060 nvmf threads initialize successfully 00:09:14.060 bdev subsystem init successfully 00:09:14.060 created a nvmf target service 00:09:14.060 create targets's poll groups done 00:09:14.060 all subsystems of target started 00:09:14.060 nvmf target is running 00:09:14.060 all subsystems of target stopped 00:09:14.060 destroy targets's poll groups done 00:09:14.060 destroyed the nvmf target service 00:09:14.060 bdev 
subsystem finish successfully 00:09:14.060 nvmf threads destroy successfully 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.060 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.060 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:14.060 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:14.060 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.060 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:14.320 00:09:14.320 real 0m12.280s 00:09:14.320 user 0m43.794s 00:09:14.320 sys 0m2.357s 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:14.320 ************************************ 00:09:14.320 END TEST nvmf_example 00:09:14.320 ************************************ 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:14.320 ************************************ 00:09:14.320 START TEST nvmf_filesystem 00:09:14.320 ************************************ 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:14.320 * Looking for test storage... 
00:09:14.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:14.320 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:14.321 17:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:14.321 17:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:14.321 #define SPDK_CONFIG_H 00:09:14.321 #define SPDK_CONFIG_APPS 1 00:09:14.321 #define SPDK_CONFIG_ARCH native 00:09:14.321 #undef SPDK_CONFIG_ASAN 00:09:14.321 #define SPDK_CONFIG_AVAHI 1 00:09:14.321 #undef SPDK_CONFIG_CET 00:09:14.321 #define SPDK_CONFIG_COVERAGE 1 00:09:14.321 #define SPDK_CONFIG_CROSS_PREFIX 00:09:14.321 #undef SPDK_CONFIG_CRYPTO 00:09:14.321 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:14.321 #undef SPDK_CONFIG_CUSTOMOCF 00:09:14.321 #undef SPDK_CONFIG_DAOS 00:09:14.321 #define SPDK_CONFIG_DAOS_DIR 00:09:14.321 #define SPDK_CONFIG_DEBUG 1 00:09:14.321 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:14.321 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:14.321 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:14.321 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:14.321 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:14.321 #undef SPDK_CONFIG_DPDK_UADK 00:09:14.321 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:14.321 #define SPDK_CONFIG_EXAMPLES 1 00:09:14.321 #undef SPDK_CONFIG_FC 00:09:14.321 #define SPDK_CONFIG_FC_PATH 00:09:14.321 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:14.321 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:14.321 #undef SPDK_CONFIG_FUSE 00:09:14.321 #undef SPDK_CONFIG_FUZZER 00:09:14.321 #define SPDK_CONFIG_FUZZER_LIB 00:09:14.321 #define SPDK_CONFIG_GOLANG 1 00:09:14.321 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:14.321 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:14.321 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:14.321 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:14.321 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:14.321 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:14.321 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:14.321 #define SPDK_CONFIG_IDXD 1 00:09:14.321 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:14.321 #undef SPDK_CONFIG_IPSEC_MB 00:09:14.321 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:14.321 #define SPDK_CONFIG_ISAL 1 00:09:14.321 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:14.321 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:14.321 #define SPDK_CONFIG_LIBDIR 00:09:14.321 #undef SPDK_CONFIG_LTO 00:09:14.321 #define SPDK_CONFIG_MAX_LCORES 128 00:09:14.321 #define SPDK_CONFIG_NVME_CUSE 1 00:09:14.321 #undef SPDK_CONFIG_OCF 00:09:14.321 #define SPDK_CONFIG_OCF_PATH 00:09:14.321 #define SPDK_CONFIG_OPENSSL_PATH 00:09:14.321 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:14.321 #define SPDK_CONFIG_PGO_DIR 00:09:14.321 #undef SPDK_CONFIG_PGO_USE 00:09:14.321 #define SPDK_CONFIG_PREFIX /usr/local 00:09:14.321 #undef SPDK_CONFIG_RAID5F 00:09:14.321 #undef SPDK_CONFIG_RBD 00:09:14.321 #define SPDK_CONFIG_RDMA 1 00:09:14.321 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:14.321 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:14.321 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:14.321 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:14.321 #define SPDK_CONFIG_SHARED 1 00:09:14.321 #undef SPDK_CONFIG_SMA 00:09:14.321 #define SPDK_CONFIG_TESTS 1 00:09:14.321 #undef SPDK_CONFIG_TSAN 00:09:14.321 #define SPDK_CONFIG_UBLK 1 00:09:14.321 #define SPDK_CONFIG_UBSAN 1 00:09:14.321 #undef SPDK_CONFIG_UNIT_TESTS 00:09:14.321 #undef SPDK_CONFIG_URING 00:09:14.321 #define SPDK_CONFIG_URING_PATH 00:09:14.321 #undef SPDK_CONFIG_URING_ZNS 00:09:14.321 #define SPDK_CONFIG_USDT 1 00:09:14.321 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:14.321 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:14.321 #undef SPDK_CONFIG_VFIO_USER 00:09:14.321 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:14.321 #define SPDK_CONFIG_VHOST 1 00:09:14.321 #define SPDK_CONFIG_VIRTIO 1 00:09:14.321 #undef SPDK_CONFIG_VTUNE 00:09:14.321 #define SPDK_CONFIG_VTUNE_DIR 00:09:14.321 #define SPDK_CONFIG_WERROR 1 00:09:14.321 #define SPDK_CONFIG_WPDK_DIR 00:09:14.321 #undef SPDK_CONFIG_XNVME 00:09:14.321 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:14.321 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:14.322 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 
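Editor's note: the long run of ": 0" / ": 1" lines paired with "export SPDK_TEST_*" statements above is autotest_common.sh giving every test flag a default before exporting it. A minimal sketch of that idiom, using flag names taken from the trace (the script source itself is not reproduced in this log, so treat the exact spelling as an assumption rather than the upstream code):

    # Give each flag a default only when the caller has not already set it, then export it.
    # Under 'set -x' the parameter expansion is resolved before the trace is emitted, which
    # is why the log shows only the bare value (": 0", ": 1", ": tcp"), not the expression.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF

    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT

    : "${SPDK_RUN_UBSAN:=0}"
    export SPDK_RUN_UBSAN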
00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:14.323 17:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # 
export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 
-- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:09:14.323 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@258 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:09:14.324 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j10 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 72617 ]] 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 72617 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:09:14.583 17:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.KBH6PM 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.KBH6PM/tests/target /tmp/spdk.KBH6PM 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=devtmpfs 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4194304 00:09:14.583 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4194304 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6257971200 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267891712 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=2487009280 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=2507157504 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=20148224 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13784199168 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5245882368 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13784199168 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5245882368 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda2 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=843546624 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1012768768 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=100016128 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6267756544 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267891712 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=135168 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda3 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=vfat 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=92499968 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=104607744 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=12107776 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=1253572608 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1253576704 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=fuse.sshfs 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=89764220928 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=105088212992 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9938558976 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:09:14.584 * Looking for test storage... 
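Editor's note: the df -T probe traced above (together with the size comparison that follows in the next lines) is how the harness picks a directory with enough free space for the test, here roughly 2.2 GB. Below is a rough, self-contained reconstruction of that flow; array and variable names mirror the trace, but the real set_test_storage helper in autotest_common.sh may differ in detail:

    #!/usr/bin/env bash
    # Rough reconstruction of the storage probe traced above; names mirror the log.
    declare -A mounts fss sizes avails uses

    # df -T columns: Filesystem Type 1K-blocks Used Available Use% Mounted-on
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))    # df reports 1K blocks; keep bytes like the log
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    requested_size=$((2147483648 + 64 * 1024 * 1024))        # 2214592512, as in the trace
    testdir=/home/vagrant/spdk_repo/spdk/test/nvmf/target    # taken from the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    mkdir -p "${storage_candidates[@]}"

    for target_dir in "${storage_candidates[@]}"; do
        # Resolve the mount point backing the candidate, then check its free space.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        target_space=${avails["$mount"]:-0}
        if ((target_space >= requested_size)); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done

In this run the /home btrfs volume had about 13.8 GB available, so the first candidate (the test directory itself) was accepted, matching the "Found test storage" message in the log.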
00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/home 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=13784199168 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == tmpfs ]] 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == ramfs ]] 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ /home == / ]] 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 
-- # xtrace_restore 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.584 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
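[Editor's note] The variables set up to this point define the veth topology that nvmf_veth_init builds: the initiator keeps 10.0.0.1 on the host side, the target receives 10.0.0.2 and 10.0.0.3 on interfaces moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are enslaved to the nvmf_br bridge. A condensed sketch of the commands the helper issues follows; the full sequence, including the second target interface, link bring-up and the connectivity pings, appears verbatim in the trace below.

    # create the target namespace and the veth pairs (namespace end = *_if, host end = *_br)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # address the initiator on the host and the target inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bridge the host-side peers together and allow NVMe/TCP traffic on port 4420
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT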
00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:14.585 Cannot find device "nvmf_tgt_br" 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.585 Cannot find device "nvmf_tgt_br2" 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:14.585 Cannot find device "nvmf_tgt_br" 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:14.585 Cannot find device "nvmf_tgt_br2" 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:14.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:14.585 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@178 -- # 
ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:14.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:14.844 00:09:14.844 --- 10.0.0.2 ping statistics --- 00:09:14.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.844 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:14.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:14.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:09:14.844 00:09:14.844 --- 10.0.0.3 ping statistics --- 00:09:14.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.844 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:14.844 00:09:14.844 --- 10.0.0.1 ping statistics --- 00:09:14.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.844 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.844 ************************************ 00:09:14.844 START TEST nvmf_filesystem_no_in_capsule 00:09:14.844 ************************************ 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:14.844 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.845 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:14.845 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.845 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=72781 00:09:14.845 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 72781 00:09:14.845 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.845 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 72781 ']' 00:09:14.845 17:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.845 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.845 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.845 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.845 17:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.103 [2024-07-24 17:56:21.827095] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:09:15.103 [2024-07-24 17:56:21.827263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.103 [2024-07-24 17:56:21.976581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.361 [2024-07-24 17:56:22.115649] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.361 [2024-07-24 17:56:22.115991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.361 [2024-07-24 17:56:22.116160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.361 [2024-07-24 17:56:22.116473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.361 [2024-07-24 17:56:22.116530] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
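[Editor's note] nvmfappstart has just launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), and waitforlisten now polls the RPC socket named above until the application answers. A minimal sketch of that idea, assuming the default /var/tmp/spdk.sock socket and the in-repo rpc.py; the real helper additionally enforces a retry limit, which is omitted here.

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the JSON-RPC socket until the target is ready to accept configuration calls
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
            kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
            sleep 0.5
    done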
00:09:15.361 [2024-07-24 17:56:22.116748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.361 [2024-07-24 17:56:22.117024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.361 [2024-07-24 17:56:22.117138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.361 [2024-07-24 17:56:22.117148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.927 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.927 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:15.927 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.927 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:15.927 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.185 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.185 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:16.185 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:16.185 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.185 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.185 [2024-07-24 17:56:22.952153] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.185 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.185 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:16.185 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.185 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.185 Malloc1 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.185 17:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.185 [2024-07-24 17:56:23.113990] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.185 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.186 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:16.186 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:16.186 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:16.186 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:16.186 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:16.186 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:16.186 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.186 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.186 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.186 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:16.186 { 00:09:16.186 "aliases": [ 00:09:16.186 "6c5225e9-e201-48f1-949d-d7d7793b8770" 00:09:16.186 ], 00:09:16.186 "assigned_rate_limits": { 00:09:16.186 "r_mbytes_per_sec": 0, 00:09:16.186 "rw_ios_per_sec": 0, 00:09:16.186 "rw_mbytes_per_sec": 0, 00:09:16.186 "w_mbytes_per_sec": 0 00:09:16.186 }, 00:09:16.186 "block_size": 512, 00:09:16.186 "claim_type": "exclusive_write", 00:09:16.186 "claimed": true, 00:09:16.186 "driver_specific": {}, 00:09:16.186 "memory_domains": [ 00:09:16.186 { 00:09:16.186 "dma_device_id": "system", 00:09:16.186 "dma_device_type": 1 00:09:16.186 }, 00:09:16.186 { 00:09:16.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.186 
"dma_device_type": 2 00:09:16.186 } 00:09:16.186 ], 00:09:16.186 "name": "Malloc1", 00:09:16.186 "num_blocks": 1048576, 00:09:16.186 "product_name": "Malloc disk", 00:09:16.186 "supported_io_types": { 00:09:16.186 "abort": true, 00:09:16.186 "compare": false, 00:09:16.186 "compare_and_write": false, 00:09:16.186 "copy": true, 00:09:16.186 "flush": true, 00:09:16.186 "get_zone_info": false, 00:09:16.186 "nvme_admin": false, 00:09:16.186 "nvme_io": false, 00:09:16.186 "nvme_io_md": false, 00:09:16.186 "nvme_iov_md": false, 00:09:16.186 "read": true, 00:09:16.186 "reset": true, 00:09:16.186 "seek_data": false, 00:09:16.186 "seek_hole": false, 00:09:16.186 "unmap": true, 00:09:16.186 "write": true, 00:09:16.186 "write_zeroes": true, 00:09:16.186 "zcopy": true, 00:09:16.186 "zone_append": false, 00:09:16.186 "zone_management": false 00:09:16.186 }, 00:09:16.186 "uuid": "6c5225e9-e201-48f1-949d-d7d7793b8770", 00:09:16.186 "zoned": false 00:09:16.186 } 00:09:16.186 ]' 00:09:16.186 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:16.477 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:16.477 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:16.477 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:16.477 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:16.477 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:16.477 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:16.477 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.477 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.477 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:16.477 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.477 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:16.478 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:19.017 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:19.018 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:19.018 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:19.018 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:19.018 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:19.018 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:19.018 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.960 ************************************ 00:09:19.960 START TEST filesystem_ext4 00:09:19.960 ************************************ 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
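[Editor's note] Up to here the trace has configured the full NVMe/TCP path and attached the initiator: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB Malloc bdev, a subsystem carrying that namespace, a listener on 10.0.0.2:4420, an nvme connect from the host, and finally a single GPT partition on the new nvme0n1 device for the filesystem subtests. A condensed sketch of that sequence, using rpc.py in place of the test's rpc_cmd wrapper and the values printed earlier in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 512 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: connect over TCP, then carve one partition spanning the device
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee \
            --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe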
00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:19.960 mke2fs 1.46.5 (30-Dec-2021) 00:09:19.960 Discarding device blocks: 0/522240 done 00:09:19.960 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:19.960 Filesystem UUID: c7fce8e6-c67f-4125-90d5-6003d5db9f7e 00:09:19.960 Superblock backups stored on blocks: 00:09:19.960 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:19.960 00:09:19.960 Allocating group tables: 0/64 done 00:09:19.960 Writing inode tables: 0/64 done 00:09:19.960 Creating journal (8192 blocks): done 00:09:19.960 Writing superblocks and filesystem accounting information: 0/64 done 00:09:19.960 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:19.960 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:19.961 
17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 72781 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:19.961 ************************************ 00:09:19.961 END TEST filesystem_ext4 00:09:19.961 ************************************ 00:09:19.961 00:09:19.961 real 0m0.285s 00:09:19.961 user 0m0.023s 00:09:19.961 sys 0m0.053s 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.961 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.222 ************************************ 00:09:20.222 START TEST filesystem_btrfs 00:09:20.222 ************************************ 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:20.222 17:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:20.222 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:20.222 btrfs-progs v6.6.2 00:09:20.222 See https://btrfs.readthedocs.io for more information. 00:09:20.222 00:09:20.222 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:20.222 NOTE: several default settings have changed in version 5.15, please make sure 00:09:20.222 this does not affect your deployments: 00:09:20.222 - DUP for metadata (-m dup) 00:09:20.222 - enabled no-holes (-O no-holes) 00:09:20.222 - enabled free-space-tree (-R free-space-tree) 00:09:20.222 00:09:20.222 Label: (null) 00:09:20.222 UUID: 09c03ed0-cd4a-4ab4-a954-a61f0e42a24f 00:09:20.222 Node size: 16384 00:09:20.222 Sector size: 4096 00:09:20.222 Filesystem size: 510.00MiB 00:09:20.222 Block group profiles: 00:09:20.222 Data: single 8.00MiB 00:09:20.222 Metadata: DUP 32.00MiB 00:09:20.222 System: DUP 8.00MiB 00:09:20.222 SSD detected: yes 00:09:20.222 Zoned device: no 00:09:20.222 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:20.222 Runtime features: free-space-tree 00:09:20.222 Checksum: crc32c 00:09:20.222 Number of devices: 1 00:09:20.222 Devices: 00:09:20.222 ID SIZE PATH 00:09:20.222 1 510.00MiB /dev/nvme0n1p1 00:09:20.222 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 72781 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
grep -q -w nvme0n1p1 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:20.222 ************************************ 00:09:20.222 END TEST filesystem_btrfs 00:09:20.222 ************************************ 00:09:20.222 00:09:20.222 real 0m0.227s 00:09:20.222 user 0m0.031s 00:09:20.222 sys 0m0.055s 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.222 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:20.479 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:20.479 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:20.479 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.479 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.479 ************************************ 00:09:20.479 START TEST filesystem_xfs 00:09:20.479 ************************************ 00:09:20.479 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:20.479 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:20.480 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:20.480 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:20.480 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:20.480 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:20.480 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:20.480 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:09:20.480 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:20.480 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:20.480 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:20.480 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:20.480 = sectsz=512 attr=2, projid32bit=1 00:09:20.480 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:20.480 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:09:20.480 data = bsize=4096 blocks=130560, imaxpct=25 00:09:20.480 = sunit=0 swidth=0 blks 00:09:20.480 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:20.480 log =internal log bsize=4096 blocks=16384, version=2 00:09:20.480 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:20.480 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:21.044 Discarding blocks...Done. 00:09:21.044 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:21.044 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 72781 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:23.570 ************************************ 00:09:23.570 END TEST filesystem_xfs 00:09:23.570 ************************************ 00:09:23.570 00:09:23.570 real 0m3.191s 00:09:23.570 user 0m0.026s 00:09:23.570 sys 0m0.064s 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
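[Editor's note] The three subtests above all exercise the same pattern from target/filesystem.sh: build the filesystem on /dev/nvme0n1p1, mount it at /mnt/device, create and delete a file with syncs in between, and unmount while checking that the target (pid 72781) is still alive; the block ending here then wipes the partition and disconnects the initiator. A compact, simplified sketch of that per-filesystem check (run_fs_check is a hypothetical name for illustration; the real helper also retries the unmount and re-reads lsblk to confirm the partition is visible):

    run_fs_check() {
            local fstype=$1 dev=/dev/nvme0n1p1 mnt=/mnt/device
            if [ "$fstype" = ext4 ]; then
                    mkfs.ext4 -F "$dev"          # -F overwrites any existing signature
            else
                    mkfs."$fstype" -f "$dev"     # btrfs and xfs use -f for the same purpose
            fi
            mount "$dev" "$mnt"
            touch "$mnt/aaa" && sync
            rm "$mnt/aaa" && sync
            umount "$mnt"
    }
    for fs in ext4 btrfs xfs; do run_fs_check "$fs"; done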
00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.570 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 72781 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 72781 ']' 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 72781 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72781 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.829 killing process with pid 72781 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72781' 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 72781 00:09:23.829 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 72781 00:09:24.121 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:24.121 00:09:24.121 real 0m9.207s 00:09:24.121 user 0m34.165s 00:09:24.121 sys 0m2.117s 00:09:24.121 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.121 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.121 ************************************ 00:09:24.121 END TEST nvmf_filesystem_no_in_capsule 00:09:24.121 ************************************ 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.121 ************************************ 00:09:24.121 START TEST nvmf_filesystem_in_capsule 00:09:24.121 ************************************ 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=73089 00:09:24.121 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 73089 00:09:24.122 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 73089 ']' 00:09:24.122 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:24.122 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.122 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.122 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
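[Editor's note] The no-in-capsule run has finished with its timing summary, and the in-capsule variant now repeats the same flow under a fresh target (pid 73089). The only functional difference is the transport configuration: instead of -c 0, the transport is created with a 4096-byte in-capsule data size, so small writes can travel inside the command capsule. The one changed call, reusing the same $rpc shorthand from the earlier sketch:

    # no-in-capsule run:  nvmf_create_transport -t tcp -o -u 8192 -c 0
    # in-capsule run:     commands may carry up to 4 KiB of data in the capsule
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096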
00:09:24.122 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.122 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.412 [2024-07-24 17:56:31.084337] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:09:24.412 [2024-07-24 17:56:31.084426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.412 [2024-07-24 17:56:31.221722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.412 [2024-07-24 17:56:31.322180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.412 [2024-07-24 17:56:31.322248] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.412 [2024-07-24 17:56:31.322258] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.412 [2024-07-24 17:56:31.322282] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.412 [2024-07-24 17:56:31.322290] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.412 [2024-07-24 17:56:31.322467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.412 [2024-07-24 17:56:31.323344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.412 [2024-07-24 17:56:31.323482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.412 [2024-07-24 17:56:31.323485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.671 [2024-07-24 17:56:31.472001] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.671 17:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.671 Malloc1 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.671 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.672 [2024-07-24 17:56:31.632038] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.672 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.672 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:24.672 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:24.672 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:24.672 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:24.672 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:24.672 17:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:24.672 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.672 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.930 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.930 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:24.930 { 00:09:24.930 "aliases": [ 00:09:24.930 "c9025fb3-6b78-4f90-87f5-038b6159e4a2" 00:09:24.930 ], 00:09:24.930 "assigned_rate_limits": { 00:09:24.930 "r_mbytes_per_sec": 0, 00:09:24.930 "rw_ios_per_sec": 0, 00:09:24.930 "rw_mbytes_per_sec": 0, 00:09:24.930 "w_mbytes_per_sec": 0 00:09:24.930 }, 00:09:24.930 "block_size": 512, 00:09:24.930 "claim_type": "exclusive_write", 00:09:24.930 "claimed": true, 00:09:24.930 "driver_specific": {}, 00:09:24.930 "memory_domains": [ 00:09:24.930 { 00:09:24.930 "dma_device_id": "system", 00:09:24.930 "dma_device_type": 1 00:09:24.930 }, 00:09:24.930 { 00:09:24.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.930 "dma_device_type": 2 00:09:24.930 } 00:09:24.930 ], 00:09:24.930 "name": "Malloc1", 00:09:24.930 "num_blocks": 1048576, 00:09:24.930 "product_name": "Malloc disk", 00:09:24.930 "supported_io_types": { 00:09:24.930 "abort": true, 00:09:24.930 "compare": false, 00:09:24.930 "compare_and_write": false, 00:09:24.930 "copy": true, 00:09:24.930 "flush": true, 00:09:24.930 "get_zone_info": false, 00:09:24.930 "nvme_admin": false, 00:09:24.930 "nvme_io": false, 00:09:24.930 "nvme_io_md": false, 00:09:24.930 "nvme_iov_md": false, 00:09:24.930 "read": true, 00:09:24.930 "reset": true, 00:09:24.930 "seek_data": false, 00:09:24.930 "seek_hole": false, 00:09:24.930 "unmap": true, 00:09:24.930 "write": true, 00:09:24.930 "write_zeroes": true, 00:09:24.930 "zcopy": true, 00:09:24.930 "zone_append": false, 00:09:24.930 "zone_management": false 00:09:24.930 }, 00:09:24.930 "uuid": "c9025fb3-6b78-4f90-87f5-038b6159e4a2", 00:09:24.930 "zoned": false 00:09:24.930 } 00:09:24.930 ]' 00:09:24.930 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:24.930 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:24.930 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:24.930 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:24.930 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:24.930 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:24.930 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:24.930 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.189 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.189 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:25.189 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.189 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:25.189 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:27.166 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:27.166 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:27.166 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.166 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:27.166 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.166 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:27.166 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:27.166 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:27.166 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:27.166 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:27.166 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:27.166 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:27.166 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:27.166 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:27.166 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:27.166 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:27.166 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:27.166 17:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:27.166 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:28.541 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:28.541 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:28.541 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:28.541 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.541 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.541 ************************************ 00:09:28.541 START TEST filesystem_in_capsule_ext4 00:09:28.541 ************************************ 00:09:28.541 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:28.541 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:28.541 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:28.541 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:28.541 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:28.542 mke2fs 1.46.5 (30-Dec-2021) 00:09:28.542 Discarding device blocks: 0/522240 done 00:09:28.542 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:28.542 Filesystem UUID: 71d14313-dda6-45c7-b3ae-c170d9ce49d4 00:09:28.542 Superblock backups stored on blocks: 00:09:28.542 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:28.542 00:09:28.542 Allocating group tables: 0/64 done 00:09:28.542 Writing inode tables: 
0/64 done 00:09:28.542 Creating journal (8192 blocks): done 00:09:28.542 Writing superblocks and filesystem accounting information: 0/64 done 00:09:28.542 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 73089 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:28.542 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:28.801 ************************************ 00:09:28.801 END TEST filesystem_in_capsule_ext4 00:09:28.801 ************************************ 00:09:28.801 00:09:28.801 real 0m0.390s 00:09:28.801 user 0m0.029s 00:09:28.801 sys 0m0.055s 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.801 
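Before the btrfs variant starts, note that every filesystem_in_capsule_* subtest repeats the cycle just traced for ext4; stripped of the xtrace prefixes it amounts to roughly the following. The device-name discovery, partitioning and force flags are copied from this run; the condensed one-pass form is a sketch, not the verbatim target/filesystem.sh flow.

  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  mkdir -p /mnt/device
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1

  mkfs.ext4 -F "/dev/${nvme_name}p1"     # the later variants use mkfs.btrfs -f and mkfs.xfs -f
  mount "/dev/${nvme_name}p1" /mnt/device
  touch /mnt/device/aaa && sync          # prove the exported namespace accepts writes through the fs
  rm /mnt/device/aaa && sync
  umount /mnt/device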
************************************ 00:09:28.801 START TEST filesystem_in_capsule_btrfs 00:09:28.801 ************************************ 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:28.801 btrfs-progs v6.6.2 00:09:28.801 See https://btrfs.readthedocs.io for more information. 00:09:28.801 00:09:28.801 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:28.801 NOTE: several default settings have changed in version 5.15, please make sure 00:09:28.801 this does not affect your deployments: 00:09:28.801 - DUP for metadata (-m dup) 00:09:28.801 - enabled no-holes (-O no-holes) 00:09:28.801 - enabled free-space-tree (-R free-space-tree) 00:09:28.801 00:09:28.801 Label: (null) 00:09:28.801 UUID: 717829ad-38ae-46b3-8c55-a244ecf6f638 00:09:28.801 Node size: 16384 00:09:28.801 Sector size: 4096 00:09:28.801 Filesystem size: 510.00MiB 00:09:28.801 Block group profiles: 00:09:28.801 Data: single 8.00MiB 00:09:28.801 Metadata: DUP 32.00MiB 00:09:28.801 System: DUP 8.00MiB 00:09:28.801 SSD detected: yes 00:09:28.801 Zoned device: no 00:09:28.801 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:28.801 Runtime features: free-space-tree 00:09:28.801 Checksum: crc32c 00:09:28.801 Number of devices: 1 00:09:28.801 Devices: 00:09:28.801 ID SIZE PATH 00:09:28.801 1 510.00MiB /dev/nvme0n1p1 00:09:28.801 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:28.801 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:29.064 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 73089 00:09:29.064 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:29.064 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:29.064 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:29.064 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:29.064 ************************************ 00:09:29.064 END TEST filesystem_in_capsule_btrfs 00:09:29.064 ************************************ 00:09:29.064 00:09:29.064 real 0m0.216s 00:09:29.064 user 0m0.026s 00:09:29.064 sys 0m0.071s 00:09:29.064 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.064 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:29.064 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:29.064 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:29.064 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.064 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.065 ************************************ 00:09:29.065 START TEST filesystem_in_capsule_xfs 00:09:29.065 ************************************ 00:09:29.065 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:29.065 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:29.065 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:29.065 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:29.065 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:29.065 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:29.065 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:29.065 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:09:29.065 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:29.065 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:29.065 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:29.065 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:29.065 = sectsz=512 attr=2, projid32bit=1 00:09:29.065 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:29.065 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:29.065 data = bsize=4096 blocks=130560, imaxpct=25 00:09:29.065 = sunit=0 swidth=0 blks 00:09:29.065 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:29.065 log =internal log bsize=4096 blocks=16384, version=2 00:09:29.065 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:29.065 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:29.632 Discarding blocks...Done. 
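The make_filesystem helper seen in each of these subtests differs only in the force flag it selects; in outline it looks like the sketch below. Only the flag selection and the final mkfs call are visible in this trace, so the body is a paraphrase rather than the verbatim common/autotest_common.sh function.

  make_filesystem() {
      local fstype=$1 dev_name=$2
      local i=0 force                          # i mirrors the 'local i=0' in the trace; its use is not shown here
      if [[ $fstype == ext4 ]]; then force=-F; else force=-f; fi
      mkfs."$fstype" $force "$dev_name"        # e.g. mkfs.xfs -f /dev/nvme0n1p1, as traced above
  }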
00:09:29.632 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:29.632 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 73089 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:31.533 00:09:31.533 real 0m2.619s 00:09:31.533 user 0m0.027s 00:09:31.533 sys 0m0.048s 00:09:31.533 ************************************ 00:09:31.533 END TEST filesystem_in_capsule_xfs 00:09:31.533 ************************************ 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:31.533 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 73089 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 73089 ']' 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 73089 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73089 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.791 killing process with pid 73089 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73089' 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 73089 00:09:31.791 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 73089 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:32.358 00:09:32.358 real 0m8.020s 00:09:32.358 user 0m29.492s 00:09:32.358 sys 0m1.934s 00:09:32.358 17:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.358 ************************************ 00:09:32.358 END TEST nvmf_filesystem_in_capsule 00:09:32.358 ************************************ 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.358 rmmod nvme_tcp 00:09:32.358 rmmod nvme_fabrics 00:09:32.358 rmmod nvme_keyring 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:32.358 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:32.359 00:09:32.359 real 0m18.060s 00:09:32.359 user 1m3.907s 00:09:32.359 sys 0m4.458s 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:32.359 ************************************ 00:09:32.359 END TEST nvmf_filesystem 00:09:32.359 ************************************ 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:32.359 17:56:39 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:32.359 ************************************ 00:09:32.359 START TEST nvmf_target_discovery 00:09:32.359 ************************************ 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:32.359 * Looking for test storage... 00:09:32.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.359 17:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:32.359 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:32.360 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:32.617 Cannot find device "nvmf_tgt_br" 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.617 Cannot find device "nvmf_tgt_br2" 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:32.617 Cannot find device "nvmf_tgt_br" 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:32.617 Cannot find device "nvmf_tgt_br2" 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:32.617 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:32.618 17:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:32.618 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:32.618 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:32.618 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:32.618 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:32.618 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:32.618 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:32.618 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:32.618 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:32.618 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:32.618 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:32.618 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:32.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:32.876 00:09:32.876 --- 10.0.0.2 ping statistics --- 00:09:32.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.876 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:32.876 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:32.876 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:09:32.876 00:09:32.876 --- 10.0.0.3 ping statistics --- 00:09:32.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.876 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:32.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:32.876 00:09:32.876 --- 10.0.0.1 ping statistics --- 00:09:32.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.876 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=73520 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 73520 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 73520 ']' 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
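The nvmf_veth_init sequence traced above reduces to a small veth/bridge topology: the initiator-side interface stays in the default namespace at 10.0.0.1, the two target-side interfaces are moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, their peer ends are enslaved to the nvmf_br bridge, and an iptables rule admits TCP port 4420. A minimal sketch reconstructed from the commands in the trace (same interface and address names; the harness's existence checks and error handling are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                   # initiator to target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                          # target back to initiator

In this run every NVMe-oF listener binds to 10.0.0.2; 10.0.0.3 is only ping-verified to confirm the second target interface is reachable.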
00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.876 17:56:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.876 [2024-07-24 17:56:39.762697] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:09:32.876 [2024-07-24 17:56:39.763101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.141 [2024-07-24 17:56:39.905298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.141 [2024-07-24 17:56:40.020042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.141 [2024-07-24 17:56:40.020314] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.141 [2024-07-24 17:56:40.020449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.141 [2024-07-24 17:56:40.020562] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.141 [2024-07-24 17:56:40.020600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.141 [2024-07-24 17:56:40.020784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.141 [2024-07-24 17:56:40.021179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.141 [2024-07-24 17:56:40.021265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.141 [2024-07-24 17:56:40.021270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 [2024-07-24 17:56:40.843533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 Null1 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 [2024-07-24 17:56:40.922498] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 Null2 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode2 Null2 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 Null3 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:34.075 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:34.075 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.075 Null4 00:09:34.075 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.075 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:34.075 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.075 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.076 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.335 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.335 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:34.335 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.335 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.335 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.335 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -a 10.0.0.2 -s 4420 00:09:34.335 00:09:34.335 Discovery Log Number of Records 6, Generation counter 6 00:09:34.335 =====Discovery Log Entry 0====== 00:09:34.335 trtype: tcp 00:09:34.335 adrfam: ipv4 00:09:34.335 subtype: current discovery subsystem 00:09:34.335 treq: not required 00:09:34.335 portid: 0 
00:09:34.335 trsvcid: 4420 00:09:34.335 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:34.335 traddr: 10.0.0.2 00:09:34.335 eflags: explicit discovery connections, duplicate discovery information 00:09:34.335 sectype: none 00:09:34.335 =====Discovery Log Entry 1====== 00:09:34.335 trtype: tcp 00:09:34.335 adrfam: ipv4 00:09:34.335 subtype: nvme subsystem 00:09:34.335 treq: not required 00:09:34.335 portid: 0 00:09:34.335 trsvcid: 4420 00:09:34.335 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:34.335 traddr: 10.0.0.2 00:09:34.335 eflags: none 00:09:34.335 sectype: none 00:09:34.335 =====Discovery Log Entry 2====== 00:09:34.335 trtype: tcp 00:09:34.335 adrfam: ipv4 00:09:34.335 subtype: nvme subsystem 00:09:34.335 treq: not required 00:09:34.335 portid: 0 00:09:34.335 trsvcid: 4420 00:09:34.335 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:34.335 traddr: 10.0.0.2 00:09:34.335 eflags: none 00:09:34.335 sectype: none 00:09:34.335 =====Discovery Log Entry 3====== 00:09:34.335 trtype: tcp 00:09:34.335 adrfam: ipv4 00:09:34.335 subtype: nvme subsystem 00:09:34.335 treq: not required 00:09:34.335 portid: 0 00:09:34.335 trsvcid: 4420 00:09:34.335 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:34.335 traddr: 10.0.0.2 00:09:34.335 eflags: none 00:09:34.335 sectype: none 00:09:34.335 =====Discovery Log Entry 4====== 00:09:34.335 trtype: tcp 00:09:34.335 adrfam: ipv4 00:09:34.335 subtype: nvme subsystem 00:09:34.335 treq: not required 00:09:34.335 portid: 0 00:09:34.335 trsvcid: 4420 00:09:34.335 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:34.335 traddr: 10.0.0.2 00:09:34.335 eflags: none 00:09:34.335 sectype: none 00:09:34.335 =====Discovery Log Entry 5====== 00:09:34.335 trtype: tcp 00:09:34.335 adrfam: ipv4 00:09:34.335 subtype: discovery subsystem referral 00:09:34.335 treq: not required 00:09:34.335 portid: 0 00:09:34.335 trsvcid: 4430 00:09:34.335 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:34.335 traddr: 10.0.0.2 00:09:34.335 eflags: none 00:09:34.335 sectype: none 00:09:34.335 Perform nvmf subsystem discovery via RPC 00:09:34.335 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:34.335 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:34.335 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.335 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.335 [ 00:09:34.335 { 00:09:34.335 "allow_any_host": true, 00:09:34.335 "hosts": [], 00:09:34.335 "listen_addresses": [ 00:09:34.335 { 00:09:34.335 "adrfam": "IPv4", 00:09:34.335 "traddr": "10.0.0.2", 00:09:34.335 "trsvcid": "4420", 00:09:34.335 "trtype": "TCP" 00:09:34.335 } 00:09:34.335 ], 00:09:34.335 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:34.335 "subtype": "Discovery" 00:09:34.335 }, 00:09:34.335 { 00:09:34.335 "allow_any_host": true, 00:09:34.335 "hosts": [], 00:09:34.335 "listen_addresses": [ 00:09:34.335 { 00:09:34.335 "adrfam": "IPv4", 00:09:34.335 "traddr": "10.0.0.2", 00:09:34.335 "trsvcid": "4420", 00:09:34.335 "trtype": "TCP" 00:09:34.335 } 00:09:34.335 ], 00:09:34.335 "max_cntlid": 65519, 00:09:34.335 "max_namespaces": 32, 00:09:34.335 "min_cntlid": 1, 00:09:34.335 "model_number": "SPDK bdev Controller", 00:09:34.335 "namespaces": [ 00:09:34.335 { 00:09:34.335 "bdev_name": "Null1", 00:09:34.335 "name": "Null1", 00:09:34.335 "nguid": 
"5E2C097CEB7C44BE867796E1AF2F7728", 00:09:34.335 "nsid": 1, 00:09:34.335 "uuid": "5e2c097c-eb7c-44be-8677-96e1af2f7728" 00:09:34.335 } 00:09:34.335 ], 00:09:34.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.335 "serial_number": "SPDK00000000000001", 00:09:34.335 "subtype": "NVMe" 00:09:34.335 }, 00:09:34.335 { 00:09:34.335 "allow_any_host": true, 00:09:34.335 "hosts": [], 00:09:34.335 "listen_addresses": [ 00:09:34.335 { 00:09:34.335 "adrfam": "IPv4", 00:09:34.335 "traddr": "10.0.0.2", 00:09:34.335 "trsvcid": "4420", 00:09:34.335 "trtype": "TCP" 00:09:34.335 } 00:09:34.335 ], 00:09:34.335 "max_cntlid": 65519, 00:09:34.335 "max_namespaces": 32, 00:09:34.335 "min_cntlid": 1, 00:09:34.335 "model_number": "SPDK bdev Controller", 00:09:34.335 "namespaces": [ 00:09:34.335 { 00:09:34.335 "bdev_name": "Null2", 00:09:34.335 "name": "Null2", 00:09:34.335 "nguid": "74955578A8D04BEA80B3057E4E5D5C23", 00:09:34.335 "nsid": 1, 00:09:34.335 "uuid": "74955578-a8d0-4bea-80b3-057e4e5d5c23" 00:09:34.335 } 00:09:34.335 ], 00:09:34.335 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:34.335 "serial_number": "SPDK00000000000002", 00:09:34.335 "subtype": "NVMe" 00:09:34.335 }, 00:09:34.335 { 00:09:34.335 "allow_any_host": true, 00:09:34.335 "hosts": [], 00:09:34.335 "listen_addresses": [ 00:09:34.335 { 00:09:34.335 "adrfam": "IPv4", 00:09:34.335 "traddr": "10.0.0.2", 00:09:34.335 "trsvcid": "4420", 00:09:34.335 "trtype": "TCP" 00:09:34.335 } 00:09:34.335 ], 00:09:34.335 "max_cntlid": 65519, 00:09:34.335 "max_namespaces": 32, 00:09:34.335 "min_cntlid": 1, 00:09:34.335 "model_number": "SPDK bdev Controller", 00:09:34.335 "namespaces": [ 00:09:34.335 { 00:09:34.335 "bdev_name": "Null3", 00:09:34.335 "name": "Null3", 00:09:34.335 "nguid": "507B431459644B0F8B0B6D493EDD0057", 00:09:34.335 "nsid": 1, 00:09:34.335 "uuid": "507b4314-5964-4b0f-8b0b-6d493edd0057" 00:09:34.335 } 00:09:34.335 ], 00:09:34.335 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:34.335 "serial_number": "SPDK00000000000003", 00:09:34.335 "subtype": "NVMe" 00:09:34.335 }, 00:09:34.335 { 00:09:34.335 "allow_any_host": true, 00:09:34.335 "hosts": [], 00:09:34.335 "listen_addresses": [ 00:09:34.335 { 00:09:34.335 "adrfam": "IPv4", 00:09:34.335 "traddr": "10.0.0.2", 00:09:34.335 "trsvcid": "4420", 00:09:34.335 "trtype": "TCP" 00:09:34.335 } 00:09:34.335 ], 00:09:34.335 "max_cntlid": 65519, 00:09:34.335 "max_namespaces": 32, 00:09:34.335 "min_cntlid": 1, 00:09:34.335 "model_number": "SPDK bdev Controller", 00:09:34.335 "namespaces": [ 00:09:34.335 { 00:09:34.335 "bdev_name": "Null4", 00:09:34.335 "name": "Null4", 00:09:34.335 "nguid": "90E26B4023DF4E1BB2E1A672BF37193E", 00:09:34.335 "nsid": 1, 00:09:34.335 "uuid": "90e26b40-23df-4e1b-b2e1-a672bf37193e" 00:09:34.335 } 00:09:34.335 ], 00:09:34.335 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:34.335 "serial_number": "SPDK00000000000004", 00:09:34.335 "subtype": "NVMe" 00:09:34.335 } 00:09:34.335 ] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.336 
17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.336 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.595 rmmod nvme_tcp 00:09:34.595 rmmod nvme_fabrics 00:09:34.595 rmmod nvme_keyring 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 73520 ']' 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 73520 
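Between the target start above and the teardown just traced, discovery.sh drives everything through RPCs. Stripped of the harness wrappers, the sequence visible in the trace amounts to the following sketch (rpc_cmd is assumed here to be the autotest wrapper around SPDK's scripts/rpc.py; addresses, ports and size arguments are taken verbatim from the log, and the host NQN/ID literals are written via the NVME_HOSTNQN/NVME_HOSTID variables that common.sh derives them from):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                 # transport options exactly as traced
  for i in $(seq 1 4); do
      rpc_cmd bdev_null_create "Null$i" 102400 512                # size/block-size arguments as in the trace
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_get_subsystems                                     # JSON dump shown above
  # teardown, mirroring the creation loop
  for i in $(seq 1 4); do
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      rpc_cmd bdev_null_delete "Null$i"
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  [[ -z $(rpc_cmd bdev_get_bdevs | jq -r '.[].name') ]]           # nothing should be left behind

The nvme discover output above shows the expected six records: the current discovery subsystem, the four NVMe subsystems cnode1 through cnode4, and the 4430 referral.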
00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 73520 ']' 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 73520 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73520 00:09:34.595 killing process with pid 73520 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73520' 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 73520 00:09:34.595 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 73520 00:09:34.853 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.853 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.853 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:34.854 00:09:34.854 real 0m2.394s 00:09:34.854 user 0m6.552s 00:09:34.854 sys 0m0.576s 00:09:34.854 ************************************ 00:09:34.854 END TEST nvmf_target_discovery 00:09:34.854 ************************************ 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:34.854 ************************************ 00:09:34.854 START TEST nvmf_referrals 00:09:34.854 
************************************ 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:34.854 * Looking for test storage... 00:09:34.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.854 17:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:34.854 17:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:34.854 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:34.855 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:34.855 Cannot find device "nvmf_tgt_br" 00:09:34.855 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:09:34.855 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.113 Cannot find device "nvmf_tgt_br2" 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:35.113 Cannot find device "nvmf_tgt_br" 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:35.113 Cannot find device "nvmf_tgt_br2" 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
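The nvmf_referrals run that starts here reuses the same scaffolding: referrals.sh sources test/nvmf/common.sh, sets its referral constants, and relies on the harness helpers seen in this trace (nvmftestinit rebuilds the namespace topology, nvmfappstart launches nvmf_tgt inside it, nvmftestfini cleans up via trap). A rough skeleton of that structure, sketched from the helper names in the trace rather than copied from referrals.sh:

  source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh    # defines nvmftestinit, nvmfappstart, rpc_cmd, ...
  NVMF_REFERRAL_IP_1=127.0.0.2
  NVMF_REFERRAL_IP_2=127.0.0.3
  NVMF_REFERRAL_IP_3=127.0.0.4
  NVMF_PORT_REFERRAL=4430
  nvmftestinit                                               # veth/netns/bridge setup, as traced above
  nvmfappstart -m 0xF                                        # nvmf_tgt on four cores inside nvmf_tgt_ns_spdk
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
  # ... referral RPCs and checks, as traced below ...
  trap - SIGINT SIGTERM EXIT
  nvmftestfini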
00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:35.113 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:35.113 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:35.113 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:35.113 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:35.113 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:35.113 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.113 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.113 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.113 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:35.113 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:35.113 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.113 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:35.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:09:35.371 00:09:35.371 --- 10.0.0.2 ping statistics --- 00:09:35.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.371 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:35.371 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.371 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:35.371 00:09:35.371 --- 10.0.0.3 ping statistics --- 00:09:35.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.371 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:35.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:09:35.371 00:09:35.371 --- 10.0.0.1 ping statistics --- 00:09:35.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.371 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.371 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=73751 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 73751 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 73751 ']' 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.372 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.372 [2024-07-24 17:56:42.210880] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
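With nvmf_tgt up inside the namespace, the remainder of the referrals test (traced below) exercises the discovery-referral RPCs end to end. Reduced to its core commands, and with the literal host NQN/ID replaced by the NVME_HOSTNQN/NVME_HOSTID variables common.sh sets, it does roughly the following:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery   # discovery service on 8009
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq length                           # expect 3
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                                # remove the referrals again
      rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq length                           # expect 0

Both paths, the nvmf_discovery_get_referrals RPC and an actual nvme discover against the 8009 discovery service, are expected to report the same three referral addresses, 127.0.0.2 through 127.0.0.4, before the removals bring the count back to zero.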
00:09:35.372 [2024-07-24 17:56:42.210977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.630 [2024-07-24 17:56:42.350597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.630 [2024-07-24 17:56:42.474369] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.630 [2024-07-24 17:56:42.474650] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.630 [2024-07-24 17:56:42.474712] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.630 [2024-07-24 17:56:42.474778] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.630 [2024-07-24 17:56:42.474825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.630 [2024-07-24 17:56:42.474952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.630 [2024-07-24 17:56:42.475537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.630 [2024-07-24 17:56:42.475612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.630 [2024-07-24 17:56:42.475685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 [2024-07-24 17:56:43.327061] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 [2024-07-24 17:56:43.352849] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:36.563 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:36.820 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:36.821 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.077 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:37.077 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.078 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:37.078 17:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:37.078 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.334 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:37.334 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:37.334 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 
--hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.335 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.591 
17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.591 rmmod nvme_tcp 00:09:37.591 rmmod nvme_fabrics 00:09:37.591 rmmod nvme_keyring 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 73751 ']' 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 73751 00:09:37.591 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 73751 ']' 00:09:37.592 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 73751 00:09:37.592 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:09:37.592 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.592 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73751 00:09:37.592 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.592 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.592 killing process with pid 73751 00:09:37.592 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73751' 00:09:37.592 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 73751 00:09:37.592 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 73751 00:09:37.849 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.849 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.849 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.849 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.849 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.849 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.849 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.849 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.849 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:37.849 00:09:37.849 real 0m3.121s 00:09:37.849 user 0m10.065s 00:09:37.849 sys 0m0.901s 00:09:37.849 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.849 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.849 ************************************ 00:09:37.849 END TEST nvmf_referrals 00:09:37.849 ************************************ 00:09:38.106 17:56:44 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:38.106 17:56:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.106 17:56:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.106 17:56:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:38.106 ************************************ 00:09:38.106 START TEST nvmf_connect_disconnect 00:09:38.106 ************************************ 00:09:38.106 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:38.106 * Looking for test storage... 00:09:38.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:38.106 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:38.106 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:38.106 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.106 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.106 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:38.107 17:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:38.107 Cannot find device "nvmf_tgt_br" 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:09:38.107 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:38.107 Cannot find device "nvmf_tgt_br2" 00:09:38.107 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:09:38.107 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:38.107 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:38.107 Cannot find device "nvmf_tgt_br" 00:09:38.107 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:09:38.107 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:38.107 Cannot find device "nvmf_tgt_br2" 00:09:38.107 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:09:38.107 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:38.107 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:38.107 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.107 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:09:38.107 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:38.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:38.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:09:38.365 00:09:38.365 --- 10.0.0.2 ping statistics --- 00:09:38.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.365 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:38.365 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:38.365 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:09:38.365 00:09:38.365 --- 10.0.0.3 ping statistics --- 00:09:38.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.365 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:38.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:09:38.365 00:09:38.365 --- 10.0.0.1 ping statistics --- 00:09:38.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.365 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=74055 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 74055 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 74055 ']' 00:09:38.365 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.365 17:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.366 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.366 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.366 17:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.623 [2024-07-24 17:56:45.399304] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:09:38.623 [2024-07-24 17:56:45.399420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.623 [2024-07-24 17:56:45.541988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.880 [2024-07-24 17:56:45.658620] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.880 [2024-07-24 17:56:45.658674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.880 [2024-07-24 17:56:45.658686] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.880 [2024-07-24 17:56:45.658696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.880 [2024-07-24 17:56:45.658704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
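[editor's note] The nvmf_veth_init portion of the trace above (the run of ip netns/ip link/ip addr/iptables calls before the three pings) builds the virtual topology the connect/disconnect test runs over: one network namespace for the target, veth pairs whose host-side ends are bridged together, and 10.0.0.x addressing. Collapsed into a standalone script, the same plumbing looks roughly like the sketch below; namespace and interface names are the ones from the log, and error handling plus the preceding cleanup of any stale interfaces are omitted.

```bash
# Condensed from the nvmf_veth_init trace above; a sketch, not the library function.
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# veth pairs: initiator side stays in the root namespace, target side moves into $NS.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: 10.0.0.1 for the initiator, 10.0.0.2/10.0.0.3 inside the target namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic through and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # sanity check: initiator can reach the target namespace
```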
00:09:38.880 [2024-07-24 17:56:45.659793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.880 [2024-07-24 17:56:45.659854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.880 [2024-07-24 17:56:45.659921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.880 [2024-07-24 17:56:45.659927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.812 [2024-07-24 17:56:46.567333] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:39.812 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.813 17:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.813 [2024-07-24 17:56:46.639625] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:39.813 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:42.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.235 17:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:51.235 17:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:51.235 17:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:51.235 17:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:51.235 17:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:51.235 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:51.235 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:51.236 rmmod nvme_tcp 00:09:51.236 rmmod nvme_fabrics 00:09:51.236 rmmod nvme_keyring 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 74055 ']' 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 74055 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 74055 ']' 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 74055 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
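[editor's note] The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are the visible output of the connect/disconnect loop: the trace provisions one malloc-backed subsystem over RPC and then attaches and detaches an initiator num_iterations=5 times. A sketch of the equivalent steps is below, with the subsystem NQN, host NQN/ID, and addresses taken from the log; the rpc.py invocation is an assumed stand-in for rpc_cmd, and the loop is written out explicitly rather than copied from connect_disconnect.sh.

```bash
# Sketch of what the trace above provisions and exercises; not the test script itself.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed rpc_cmd target (repo path from the log)
sock=/var/tmp/spdk.sock
subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee
hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee

# Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, one subsystem, one listener.
"$rpc" -s "$sock" nvmf_create_transport -t tcp -o -u 8192 -c 0
"$rpc" -s "$sock" bdev_malloc_create 64 512            # returns the bdev name, "Malloc0" in the trace
"$rpc" -s "$sock" nvmf_create_subsystem "$subnqn" -a -s SPDKISFASTANDAWESOME
"$rpc" -s "$sock" nvmf_subsystem_add_ns "$subnqn" Malloc0
"$rpc" -s "$sock" nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.2 -s 4420

# Initiator side: five connect/disconnect cycles, matching num_iterations=5 in the trace.
for i in $(seq 1 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$subnqn" \
        --hostnqn="$hostnqn" --hostid="$hostid"
    nvme disconnect -n "$subnqn"   # prints "NQN:<nqn> disconnected 1 controller(s)"
done
```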
00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74055 00:09:51.236 killing process with pid 74055 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74055' 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 74055 00:09:51.236 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 74055 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:51.543 00:09:51.543 real 0m13.500s 00:09:51.543 user 0m48.786s 00:09:51.543 sys 0m2.654s 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.543 ************************************ 00:09:51.543 END TEST nvmf_connect_disconnect 00:09:51.543 ************************************ 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:51.543 ************************************ 00:09:51.543 START TEST nvmf_multitarget 00:09:51.543 ************************************ 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:51.543 * Looking for test storage... 
00:09:51.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.543 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.544 17:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp 
']' 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:51.544 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:51.803 Cannot find device "nvmf_tgt_br" 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:09:51.803 Cannot find device "nvmf_tgt_br2" 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:51.803 Cannot find device "nvmf_tgt_br" 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:51.803 Cannot find device "nvmf_tgt_br2" 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:51.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:51.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:51.803 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:52.062 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:52.062 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
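Up to this point nvmf_veth_init has built the virtual test topology: veth pairs are created, the target-side ends (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the initiator keeps nvmf_init_if at 10.0.0.1/24 in the root namespace; the bridge, iptables rules and ping checks follow just below. Condensed to the essential commands (one veth pair shown; the second target interface is handled the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up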
00:09:52.062 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:52.062 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:52.062 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:52.062 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:52.062 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:52.062 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:52.062 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:52.062 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:52.062 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:52.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:09:52.063 00:09:52.063 --- 10.0.0.2 ping statistics --- 00:09:52.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.063 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:52.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:52.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:09:52.063 00:09:52.063 --- 10.0.0.3 ping statistics --- 00:09:52.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.063 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:52.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:09:52.063 00:09:52.063 --- 10.0.0.1 ping statistics --- 00:09:52.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.063 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=74453 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 74453 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 74453 ']' 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.063 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:52.063 [2024-07-24 17:56:58.975497] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:09:52.063 [2024-07-24 17:56:58.975612] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.321 [2024-07-24 17:56:59.118857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.321 [2024-07-24 17:56:59.225388] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.321 [2024-07-24 17:56:59.225622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.321 [2024-07-24 17:56:59.225773] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.321 [2024-07-24 17:56:59.225789] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.321 [2024-07-24 17:56:59.225797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.321 [2024-07-24 17:56:59.225917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.321 [2024-07-24 17:56:59.226442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.321 [2024-07-24 17:56:59.226488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.321 [2024-07-24 17:56:59.226490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.256 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.256 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:09:53.256 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.256 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.256 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:53.256 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.256 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:53.256 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:53.256 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:53.256 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:53.256 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:53.513 "nvmf_tgt_1" 00:09:53.513 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:53.772 "nvmf_tgt_2" 00:09:53.772 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:53.772 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:09:53.772 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:53.772 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:54.032 true 00:09:54.032 17:57:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:54.289 true 00:09:54.289 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:54.289 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:54.289 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:54.289 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:54.289 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:54.289 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.289 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:54.289 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.289 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:54.289 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.289 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.289 rmmod nvme_tcp 00:09:54.289 rmmod nvme_fabrics 00:09:54.547 rmmod nvme_keyring 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 74453 ']' 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 74453 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 74453 ']' 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 74453 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74453 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.547 killing process with pid 74453 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
74453' 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 74453 00:09:54.547 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 74453 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:54.807 00:09:54.807 real 0m3.183s 00:09:54.807 user 0m10.561s 00:09:54.807 sys 0m0.776s 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.807 ************************************ 00:09:54.807 END TEST nvmf_multitarget 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:54.807 ************************************ 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:54.807 ************************************ 00:09:54.807 START TEST nvmf_rpc 00:09:54.807 ************************************ 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:54.807 * Looking for test storage... 
00:09:54.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.807 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:54.808 17:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:54.808 Cannot find device "nvmf_tgt_br" 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:09:54.808 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:55.067 Cannot find device "nvmf_tgt_br2" 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:55.067 Cannot find device "nvmf_tgt_br" 00:09:55.067 17:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:55.067 Cannot find device "nvmf_tgt_br2" 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:55.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:55.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:55.067 17:57:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:55.067 
17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:55.067 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:55.067 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:55.326 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:55.326 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:55.326 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:55.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:09:55.326 00:09:55.326 --- 10.0.0.2 ping statistics --- 00:09:55.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.326 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:55.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:55.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:55.327 00:09:55.327 --- 10.0.0.3 ping statistics --- 00:09:55.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.327 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:55.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:55.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:55.327 00:09:55.327 --- 10.0.0.1 ping statistics --- 00:09:55.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.327 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=74685 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 74685 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 74685 ']' 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:55.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:55.327 17:57:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.327 [2024-07-24 17:57:02.146658] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:09:55.327 [2024-07-24 17:57:02.146765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.327 [2024-07-24 17:57:02.287113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.617 [2024-07-24 17:57:02.406134] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.617 [2024-07-24 17:57:02.406191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.617 [2024-07-24 17:57:02.406203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.617 [2024-07-24 17:57:02.406213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.617 [2024-07-24 17:57:02.406221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
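As in the multitarget run above, nvmfappstart launches the target inside the test namespace and waitforlisten blocks until the RPC socket answers. Stripped of the xtrace plumbing, the start-up amounts to roughly the following (the backgrounding and pid capture are an assumption about what nvmfappstart/waitforlisten do; only the nvmf_tgt command line itself appears verbatim above):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid" /var/tmp/spdk.sock   # poll until the app accepts RPCs

with -m 0xF giving the four reactor cores reported in the notices that follow.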
00:09:55.617 [2024-07-24 17:57:02.406332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.617 [2024-07-24 17:57:02.406385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.617 [2024-07-24 17:57:02.409107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.617 [2024-07-24 17:57:02.409115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.551 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.551 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:56.552 "poll_groups": [ 00:09:56.552 { 00:09:56.552 "admin_qpairs": 0, 00:09:56.552 "completed_nvme_io": 0, 00:09:56.552 "current_admin_qpairs": 0, 00:09:56.552 "current_io_qpairs": 0, 00:09:56.552 "io_qpairs": 0, 00:09:56.552 "name": "nvmf_tgt_poll_group_000", 00:09:56.552 "pending_bdev_io": 0, 00:09:56.552 "transports": [] 00:09:56.552 }, 00:09:56.552 { 00:09:56.552 "admin_qpairs": 0, 00:09:56.552 "completed_nvme_io": 0, 00:09:56.552 "current_admin_qpairs": 0, 00:09:56.552 "current_io_qpairs": 0, 00:09:56.552 "io_qpairs": 0, 00:09:56.552 "name": "nvmf_tgt_poll_group_001", 00:09:56.552 "pending_bdev_io": 0, 00:09:56.552 "transports": [] 00:09:56.552 }, 00:09:56.552 { 00:09:56.552 "admin_qpairs": 0, 00:09:56.552 "completed_nvme_io": 0, 00:09:56.552 "current_admin_qpairs": 0, 00:09:56.552 "current_io_qpairs": 0, 00:09:56.552 "io_qpairs": 0, 00:09:56.552 "name": "nvmf_tgt_poll_group_002", 00:09:56.552 "pending_bdev_io": 0, 00:09:56.552 "transports": [] 00:09:56.552 }, 00:09:56.552 { 00:09:56.552 "admin_qpairs": 0, 00:09:56.552 "completed_nvme_io": 0, 00:09:56.552 "current_admin_qpairs": 0, 00:09:56.552 "current_io_qpairs": 0, 00:09:56.552 "io_qpairs": 0, 00:09:56.552 "name": "nvmf_tgt_poll_group_003", 00:09:56.552 "pending_bdev_io": 0, 00:09:56.552 "transports": [] 00:09:56.552 } 00:09:56.552 ], 00:09:56.552 "tick_rate": 2100000000 00:09:56.552 }' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
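The stats check above confirms one poll group per core in the 0xF mask; the entries that follow verify that no transport is attached yet (.poll_groups[0].transports[0] is null), create the TCP transport, and re-read the stats expecting a TCP entry in every poll group with all qpair counters at zero. Roughly, with rpc.py standing in for rpc_cmd and jq used as in the jcount/jsum helpers:

  rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l            # 4 poll groups for -m 0xF
  rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0]'          # null before any transport exists
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_get_stats | jq '.poll_groups[].transports[0].trtype'    # "TCP" in each of the four groups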
00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.552 [2024-07-24 17:57:03.301383] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:56.552 "poll_groups": [ 00:09:56.552 { 00:09:56.552 "admin_qpairs": 0, 00:09:56.552 "completed_nvme_io": 0, 00:09:56.552 "current_admin_qpairs": 0, 00:09:56.552 "current_io_qpairs": 0, 00:09:56.552 "io_qpairs": 0, 00:09:56.552 "name": "nvmf_tgt_poll_group_000", 00:09:56.552 "pending_bdev_io": 0, 00:09:56.552 "transports": [ 00:09:56.552 { 00:09:56.552 "trtype": "TCP" 00:09:56.552 } 00:09:56.552 ] 00:09:56.552 }, 00:09:56.552 { 00:09:56.552 "admin_qpairs": 0, 00:09:56.552 "completed_nvme_io": 0, 00:09:56.552 "current_admin_qpairs": 0, 00:09:56.552 "current_io_qpairs": 0, 00:09:56.552 "io_qpairs": 0, 00:09:56.552 "name": "nvmf_tgt_poll_group_001", 00:09:56.552 "pending_bdev_io": 0, 00:09:56.552 "transports": [ 00:09:56.552 { 00:09:56.552 "trtype": "TCP" 00:09:56.552 } 00:09:56.552 ] 00:09:56.552 }, 00:09:56.552 { 00:09:56.552 "admin_qpairs": 0, 00:09:56.552 "completed_nvme_io": 0, 00:09:56.552 "current_admin_qpairs": 0, 00:09:56.552 "current_io_qpairs": 0, 00:09:56.552 "io_qpairs": 0, 00:09:56.552 "name": "nvmf_tgt_poll_group_002", 00:09:56.552 "pending_bdev_io": 0, 00:09:56.552 "transports": [ 00:09:56.552 { 00:09:56.552 "trtype": "TCP" 00:09:56.552 } 00:09:56.552 ] 00:09:56.552 }, 00:09:56.552 { 00:09:56.552 "admin_qpairs": 0, 00:09:56.552 "completed_nvme_io": 0, 00:09:56.552 "current_admin_qpairs": 0, 00:09:56.552 "current_io_qpairs": 0, 00:09:56.552 "io_qpairs": 0, 00:09:56.552 "name": "nvmf_tgt_poll_group_003", 00:09:56.552 "pending_bdev_io": 0, 00:09:56.552 "transports": [ 00:09:56.552 { 00:09:56.552 "trtype": "TCP" 00:09:56.552 } 00:09:56.552 ] 00:09:56.552 } 00:09:56.552 ], 00:09:56.552 "tick_rate": 2100000000 00:09:56.552 }' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:56.552 17:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.552 Malloc1 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.552 [2024-07-24 17:57:03.489690] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -a 10.0.0.2 -s 4420 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -a 10.0.0.2 -s 4420 00:09:56.552 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:09:56.553 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.553 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:09:56.553 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.553 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:09:56.553 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.553 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:09:56.553 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:09:56.553 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -a 10.0.0.2 -s 4420 00:09:56.553 [2024-07-24 17:57:03.517971] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee' 00:09:56.553 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:56.553 could not add new controller: failed to write to nvme-fabrics device 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 
--hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:56.811 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.342 [2024-07-24 17:57:05.819334] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee' 00:09:59.342 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:59.342 could not add new controller: failed to write to nvme-fabrics device 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:59.342 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
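The trace up to this point exercises SPDK's per-subsystem host access control over NVMe/TCP: a connect attempt from an unlisted host NQN is rejected by nvmf_qpair_access_allowed, then succeeds after nvmf_subsystem_add_host, and again after allow_any_host is re-enabled with -e. A condensed, hypothetical replay of that sequence is sketched below. It assumes a running SPDK target reachable at 10.0.0.2:4420, that rpc.py stands in for the log's rpc_cmd wrapper around SPDK's scripts/rpc.py, and that nvme-cli is installed; the RPC names and flags themselves are copied from the trace above.

NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee

rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport init (flags as traced)
rpc.py bdev_malloc_create 64 512 -b Malloc1               # 64 MB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1
rpc.py nvmf_subsystem_allow_any_host -d "$NQN"            # restrict access to listed hosts
rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Rejected: the host NQN is not on the subsystem's allowed list yet.
nvme connect --hostnqn="$HOSTNQN" --hostid="${HOSTNQN##*:}" \
    -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 || echo "connect rejected, as expected"

# Accepted once the host is whitelisted (or allow_any_host is re-enabled with -e).
rpc.py nvmf_subsystem_add_host "$NQN" "$HOSTNQN"
nvme connect --hostnqn="$HOSTNQN" --hostid="${HOSTNQN##*:}" \
    -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
nvme disconnect -n "$NQN"

Teardown mirrors the loop that follows in the log: nvmf_subsystem_remove_ns (or nvmf_subsystem_remove_host), then nvmf_delete_subsystem.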
00:10:01.276 17:57:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:01.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.276 [2024-07-24 17:57:08.129326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.276 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.277 
17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:01.277 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.277 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.277 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.277 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:01.277 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.277 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.277 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.277 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:01.536 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:01.536 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:01.536 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:01.536 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:01.536 17:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
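The repeated lsblk/grep xtrace above comes from the test's waitforserial and waitforserial_disconnect helpers, which poll until the namespace advertising serial SPDKISFASTANDAWESOME appears as, or disappears from, a local block device after nvme connect/disconnect. Below is an approximate re-creation inferred from the trace (the real helpers live in autotest_common.sh and may differ in detail), plus a stdin-based variant of the jsum filter traced from target/rpc.sh.

# Poll (up to 16 tries, 2 s apart) until a block device advertising the
# given serial shows up, i.e. the NVMe/TCP controller finished attaching.
waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    return 1
}

# Converse helper: succeed once no block device reports the serial any more.
waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 2
    done
    return 1
}

# jsum: sum whatever numbers a jq filter selects from JSON on stdin,
# e.g.  echo "$stats" | jsum '.poll_groups[].io_qpairs'
jsum() {
    jq "$1" | awk '{s+=$1} END {print s}'
}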
00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.498 [2024-07-24 17:57:10.436949] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.498 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:03.758 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:03.758 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:10:03.758 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:03.758 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:03.758 17:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:05.661 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.919 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.920 [2024-07-24 17:57:12.853576] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.920 17:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.179 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:06.179 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:06.179 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.179 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:06.179 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:08.081 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:08.081 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:08.081 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:08.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.340 17:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.340 [2024-07-24 17:57:15.173673] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.340 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.599 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.600 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:08.600 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.600 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:08.600 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.546 [2024-07-24 17:57:17.465455] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.546 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:10.804 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:10.804 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:10.804 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:10.804 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:10.804 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:12.698 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:12.698 17:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:12.698 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:12.698 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:12.698 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:12.698 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:12.698 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 [2024-07-24 17:57:19.756892] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 [2024-07-24 17:57:19.804950] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.956 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.957 [2024-07-24 17:57:19.852998] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.957 [2024-07-24 17:57:19.901020] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.957 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.214 [2024-07-24 17:57:19.949086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.214 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.215 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.215 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.215 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.215 17:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:13.215 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.215 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.215 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.215 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:13.215 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.215 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:13.215 "poll_groups": [ 00:10:13.215 { 00:10:13.215 "admin_qpairs": 2, 00:10:13.215 "completed_nvme_io": 165, 00:10:13.215 "current_admin_qpairs": 0, 00:10:13.215 "current_io_qpairs": 0, 00:10:13.215 "io_qpairs": 16, 00:10:13.215 "name": "nvmf_tgt_poll_group_000", 00:10:13.215 "pending_bdev_io": 0, 00:10:13.215 "transports": [ 00:10:13.215 { 00:10:13.215 "trtype": "TCP" 00:10:13.215 } 00:10:13.215 ] 00:10:13.215 }, 00:10:13.215 { 00:10:13.215 "admin_qpairs": 3, 00:10:13.215 "completed_nvme_io": 67, 00:10:13.215 "current_admin_qpairs": 0, 00:10:13.215 "current_io_qpairs": 0, 00:10:13.215 "io_qpairs": 17, 00:10:13.215 "name": "nvmf_tgt_poll_group_001", 00:10:13.215 "pending_bdev_io": 0, 00:10:13.215 "transports": [ 00:10:13.215 { 00:10:13.215 "trtype": "TCP" 00:10:13.215 } 00:10:13.215 ] 00:10:13.215 }, 00:10:13.215 { 00:10:13.215 "admin_qpairs": 1, 00:10:13.215 "completed_nvme_io": 71, 00:10:13.215 "current_admin_qpairs": 0, 00:10:13.215 "current_io_qpairs": 0, 00:10:13.215 "io_qpairs": 19, 00:10:13.215 "name": "nvmf_tgt_poll_group_002", 00:10:13.215 "pending_bdev_io": 0, 00:10:13.215 "transports": [ 00:10:13.215 { 00:10:13.215 "trtype": "TCP" 00:10:13.215 } 00:10:13.215 ] 00:10:13.215 }, 00:10:13.215 { 00:10:13.215 "admin_qpairs": 1, 00:10:13.215 "completed_nvme_io": 117, 00:10:13.215 "current_admin_qpairs": 0, 00:10:13.215 "current_io_qpairs": 0, 00:10:13.215 "io_qpairs": 18, 00:10:13.215 "name": "nvmf_tgt_poll_group_003", 00:10:13.215 "pending_bdev_io": 0, 00:10:13.215 "transports": [ 00:10:13.215 { 00:10:13.215 "trtype": "TCP" 00:10:13.215 } 00:10:13.215 ] 00:10:13.215 } 00:10:13.215 ], 00:10:13.215 "tick_rate": 2100000000 00:10:13.215 }' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:13.215 rmmod nvme_tcp 00:10:13.215 rmmod nvme_fabrics 00:10:13.215 rmmod nvme_keyring 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 74685 ']' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 74685 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 74685 ']' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 74685 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74685 00:10:13.215 killing process with pid 74685 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74685' 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 74685 00:10:13.215 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 74685 00:10:13.473 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:13.473 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:13.473 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:13.473 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:13.473 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:13.473 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.473 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.473 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.473 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:13.473 00:10:13.473 real 0m18.811s 00:10:13.473 user 1m9.341s 00:10:13.473 sys 0m3.742s 00:10:13.473 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.473 ************************************ 00:10:13.473 END TEST nvmf_rpc 00:10:13.473 ************************************ 00:10:13.473 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:13.731 ************************************ 00:10:13.731 START TEST nvmf_invalid 00:10:13.731 ************************************ 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:13.731 * Looking for test storage... 00:10:13.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:13.731 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt 
== phy ]] 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:13.732 Cannot find device "nvmf_tgt_br" 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:13.732 Cannot find device "nvmf_tgt_br2" 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:13.732 Cannot find device "nvmf_tgt_br" 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:13.732 Cannot find device "nvmf_tgt_br2" 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:13.732 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:13.990 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:13.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 
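The nvmf_veth_init trace above is effectively a recipe for the virtual test topology: one network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends at 10.0.0.2 and 10.0.0.3, the initiator end at 10.0.0.1 on the host, a bridge (nvmf_br) tying the host-side peers together, and iptables rules opening TCP port 4420. A minimal standalone sketch of the same setup, assuming root plus iproute2/iptables but none of the SPDK helper functions, would be:

# Sketch of the topology built by nvmf_veth_init above; names mirror the trace.
set -e
ip netns add nvmf_tgt_ns_spdk                               # target-side namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk             # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
for br_if in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$br_if" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge for the host-side peers
for br_if in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$br_if" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

Teardown is the mirror image performed by the remove_spdk_ns path earlier in the log: detach the bridge ports, delete nvmf_br, the veth pairs, and finally the namespace.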
00:10:13.990 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:13.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:13.990 00:10:13.990 --- 10.0.0.2 ping statistics --- 00:10:13.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.991 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:13.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:13.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:10:13.991 00:10:13.991 --- 10.0.0.3 ping statistics --- 00:10:13.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.991 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:13.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:13.991 00:10:13.991 --- 10.0.0.1 ping statistics --- 00:10:13.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.991 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=75196 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 75196 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 75196 ']' 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
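With connectivity across the namespace boundary confirmed by the pings above, nvmfappstart launches nvmf_tgt inside nvmf_tgt_ns_spdk (the command is traced just below) and waitforlisten blocks until the JSON-RPC socket answers before any rpc.py calls are issued. Stripped of the test-framework helpers, the pattern is roughly the following sketch; the spdk_get_version probe and the ~30-second budget are assumptions, not the framework's exact logic:

# Hypothetical stand-alone version of the launch-and-wait step traced below.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the UNIX-domain JSON-RPC socket until the target responds (~30 s max).
for _ in $(seq 1 300); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            spdk_get_version >/dev/null 2>&1; then
        break
    fi
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done

Once the socket answers, the rpc.py calls that follow in the trace are all issued against that same /var/tmp/spdk.sock.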
00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.991 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:14.299 [2024-07-24 17:57:20.990629] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:10:14.299 [2024-07-24 17:57:20.990702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.299 [2024-07-24 17:57:21.121345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.299 [2024-07-24 17:57:21.236680] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.299 [2024-07-24 17:57:21.236942] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.299 [2024-07-24 17:57:21.237072] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.299 [2024-07-24 17:57:21.237187] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.299 [2024-07-24 17:57:21.237228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
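The app_setup_trace notices above point at two ways to inspect the 0xFFFF tracepoint groups enabled by -e: snapshot them live with spdk_trace, or copy the shared-memory file for offline decoding. A hedged sketch of both, reusing the invocation the notice itself prints (the -f flag for decoding a copied file is an assumption taken from upstream spdk_trace usage, not from this log):

# Live snapshot while nvmf_tgt (shm id 0, app name "nvmf") is still running,
# exactly as the notice above suggests:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
# Or preserve the raw trace for analysis after the target exits:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
# Decoding the copied file later (assumed flag; check spdk_trace --help):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f /tmp/nvmf_trace.0 > nvmf_trace_offline.txt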
00:10:14.299 [2024-07-24 17:57:21.237446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.299 [2024-07-24 17:57:21.237582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.299 [2024-07-24 17:57:21.237625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.299 [2024-07-24 17:57:21.237631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.234 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.234 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:10:15.234 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:15.234 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:15.234 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:15.234 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.234 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:15.234 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6338 00:10:15.234 [2024-07-24 17:57:22.143460] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:15.234 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/24 17:57:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6338 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:10:15.234 request: 00:10:15.234 { 00:10:15.234 "method": "nvmf_create_subsystem", 00:10:15.234 "params": { 00:10:15.234 "nqn": "nqn.2016-06.io.spdk:cnode6338", 00:10:15.234 "tgt_name": "foobar" 00:10:15.234 } 00:10:15.234 } 00:10:15.234 Got JSON-RPC error response 00:10:15.234 GoRPCClient: error on JSON-RPC call' 00:10:15.234 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/24 17:57:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6338 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:10:15.234 request: 00:10:15.234 { 00:10:15.234 "method": "nvmf_create_subsystem", 00:10:15.234 "params": { 00:10:15.234 "nqn": "nqn.2016-06.io.spdk:cnode6338", 00:10:15.234 "tgt_name": "foobar" 00:10:15.234 } 00:10:15.234 } 00:10:15.234 Got JSON-RPC error response 00:10:15.234 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:15.234 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:15.234 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21545 00:10:15.492 [2024-07-24 17:57:22.355679] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21545: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:15.492 17:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/24 17:57:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21545 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:10:15.492 request: 00:10:15.492 { 00:10:15.492 "method": "nvmf_create_subsystem", 00:10:15.492 "params": { 00:10:15.492 "nqn": "nqn.2016-06.io.spdk:cnode21545", 00:10:15.492 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:10:15.492 } 00:10:15.492 } 00:10:15.492 Got JSON-RPC error response 00:10:15.492 GoRPCClient: error on JSON-RPC call' 00:10:15.492 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/24 17:57:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21545 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:10:15.492 request: 00:10:15.492 { 00:10:15.492 "method": "nvmf_create_subsystem", 00:10:15.492 "params": { 00:10:15.492 "nqn": "nqn.2016-06.io.spdk:cnode21545", 00:10:15.492 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:10:15.492 } 00:10:15.492 } 00:10:15.492 Got JSON-RPC error response 00:10:15.492 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:15.492 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:15.492 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29883 00:10:15.752 [2024-07-24 17:57:22.616112] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29883: invalid model number 'SPDK_Controller' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/24 17:57:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode29883], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:10:15.752 request: 00:10:15.752 { 00:10:15.752 "method": "nvmf_create_subsystem", 00:10:15.752 "params": { 00:10:15.752 "nqn": "nqn.2016-06.io.spdk:cnode29883", 00:10:15.752 "model_number": "SPDK_Controller\u001f" 00:10:15.752 } 00:10:15.752 } 00:10:15.752 Got JSON-RPC error response 00:10:15.752 GoRPCClient: error on JSON-RPC call' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/24 17:57:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode29883], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:10:15.752 request: 00:10:15.752 { 00:10:15.752 "method": "nvmf_create_subsystem", 00:10:15.752 "params": { 00:10:15.752 "nqn": "nqn.2016-06.io.spdk:cnode29883", 00:10:15.752 "model_number": "SPDK_Controller\u001f" 00:10:15.752 } 00:10:15.752 } 00:10:15.752 Got JSON-RPC error response 00:10:15.752 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:15.752 17:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:15.752 
17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:15.752 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 
00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ B == \- ]] 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'BgX(%"H|rrd{zjvX9o3`W' 00:10:16.011 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'BgX(%"H|rrd{zjvX9o3`W' nqn.2016-06.io.spdk:cnode31770 00:10:16.270 [2024-07-24 17:57:23.020464] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31770: invalid serial number 'BgX(%"H|rrd{zjvX9o3`W' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/24 17:57:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31770 serial_number:BgX(%"H|rrd{zjvX9o3`W], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN BgX(%"H|rrd{zjvX9o3`W 00:10:16.270 request: 00:10:16.270 { 00:10:16.270 "method": "nvmf_create_subsystem", 00:10:16.270 "params": { 00:10:16.270 "nqn": "nqn.2016-06.io.spdk:cnode31770", 00:10:16.270 "serial_number": "BgX(%\"H|rrd{zjvX9o3`W" 00:10:16.270 } 00:10:16.270 } 00:10:16.270 Got JSON-RPC error response 00:10:16.270 GoRPCClient: error on JSON-RPC call' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/24 17:57:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31770 serial_number:BgX(%"H|rrd{zjvX9o3`W], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN BgX(%"H|rrd{zjvX9o3`W 00:10:16.270 request: 00:10:16.270 { 00:10:16.270 "method": "nvmf_create_subsystem", 00:10:16.270 "params": { 00:10:16.270 "nqn": "nqn.2016-06.io.spdk:cnode31770", 00:10:16.270 "serial_number": "BgX(%\"H|rrd{zjvX9o3`W" 00:10:16.270 } 00:10:16.270 } 00:10:16.270 Got JSON-RPC error response 00:10:16.270 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:16.270 17:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:16.270 17:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:16.270 
17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.270 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 
00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:10:16.271 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.529 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:16.529 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:16.529 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ j == \- ]] 00:10:16.530 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'j(q88qHn /hC`7 /dev/null' 00:10:19.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:19.433 ************************************ 00:10:19.433 END TEST nvmf_invalid 00:10:19.433 ************************************ 00:10:19.433 00:10:19.433 real 0m5.842s 00:10:19.433 user 0m23.271s 00:10:19.433 sys 0m1.424s 00:10:19.433 17:57:26 
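
For readers following the xtrace above: the long run of invalid.sh@24/@25 lines is the test's random-name generator at work. Each pass picks a code point, renders it with printf %x, expands it back into a character with echo -e, and appends it to string; invalid.sh@28 then guards against a leading '-' before invalid.sh@31 echoes the finished name. A minimal sketch of that pattern, assuming the helper name gen_random_s and a code-point range inferred from the bytes seen in the trace (0x20 through 0x7f):

    gen_random_s() {
        local length=$1 ll string=
        for ((ll = 0; ll < length; ll++)); do
            # pick a code point, print it as hex, expand it back into a character
            local code=$((RANDOM % 96 + 32))                 # roughly 0x20..0x7f, as seen above
            string+=$(echo -e "\x$(printf '%x' "$code")")
        done
        # escape a leading '-' so the result is not parsed as a command-line option
        if [[ ${string::1} == "-" ]]; then
            string="\\${string}"
        fi
        echo "$string"
    }

    # hypothetical usage: build a garbage subsystem name to feed the invalid-input checks
    bad_name=$(gen_random_s 41)
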
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:19.433 17:57:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:19.433 17:57:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:19.433 17:57:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.433 17:57:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:19.433 ************************************ 00:10:19.433 START TEST nvmf_connect_stress 00:10:19.433 ************************************ 00:10:19.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:19.731 * Looking for test storage... 00:10:19.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.731 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.732 
17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:19.732 17:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:19.732 Cannot find device "nvmf_tgt_br" 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:19.732 Cannot find device "nvmf_tgt_br2" 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:19.732 Cannot find device "nvmf_tgt_br" 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:19.732 Cannot find device "nvmf_tgt_br2" 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:19.732 
17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:19.732 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:19.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:10:19.991 00:10:19.991 --- 10.0.0.2 ping statistics --- 00:10:19.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.991 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:19.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:19.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:10:19.991 00:10:19.991 --- 10.0.0.3 ping statistics --- 00:10:19.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.991 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:19.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:10:19.991 00:10:19.991 --- 10.0.0.1 ping statistics --- 00:10:19.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.991 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=75706 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 75706 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 75706 ']' 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
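
Stripped of the xtrace noise, the nvmf_veth_init sequence traced above builds one network namespace, three veth pairs, and a bridge that ties the host-side ends together, with the initiator on 10.0.0.1 and the target listeners on 10.0.0.2/10.0.0.3. A condensed sketch of those commands (the best-effort cleanup of pre-existing devices, which is what produces the "Cannot find device" messages above, is omitted):

    ip netns add nvmf_tgt_ns_spdk

    # one veth pair for the initiator, two for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # the target-side ends live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the host-side ends so 10.0.0.1 can reach 10.0.0.2/10.0.0.3
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP traffic in and across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity pings in both directions, as in the trace above
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
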
00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.991 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.991 [2024-07-24 17:57:26.936239] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:10:19.992 [2024-07-24 17:57:26.936350] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.250 [2024-07-24 17:57:27.073343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.250 [2024-07-24 17:57:27.182232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.250 [2024-07-24 17:57:27.182526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.250 [2024-07-24 17:57:27.182629] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.250 [2024-07-24 17:57:27.182680] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.250 [2024-07-24 17:57:27.182708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.250 [2024-07-24 17:57:27.183181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.250 [2024-07-24 17:57:27.183793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.250 [2024-07-24 17:57:27.183793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.186 [2024-07-24 17:57:27.925670] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.186 17:57:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.186 [2024-07-24 17:57:27.949881] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.186 NULL1 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=75757 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.186 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 
00:10:21.187 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.187 17:57:28 
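
The bring-up traced in the lines above amounts to: start nvmf_tgt inside the namespace, wait for its RPC socket, provision a TCP transport, a subsystem, a listener and a null bdev, then launch the connect_stress client against that listener. A condensed sketch follows; waitforlisten and rpc_cmd are the harness helpers seen in the trace, and rpc_cmd is assumed to forward to the target's JSON-RPC socket at /var/tmp/spdk.sock:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                     # blocks until /var/tmp/spdk.sock answers

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512      # null bdev created for the test

    /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
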
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.187 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.446 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.446 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:21.446 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.446 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.446 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.014 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.014 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:22.014 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.014 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.014 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.288 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.288 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:22.288 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.288 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.288 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.610 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.610 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:22.610 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.610 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.610 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.869 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.869 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:22.869 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.869 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.869 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.127 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.127 17:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:23.127 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.127 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.127 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.385 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.385 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:23.385 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.385 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.385 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.644 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.644 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:23.644 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.644 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.644 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.212 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.212 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:24.212 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.212 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.212 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.470 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.470 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:24.470 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.470 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.470 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.730 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.730 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:24.730 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.730 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.730 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.988 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.988 17:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:24.988 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.988 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.988 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.247 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.247 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:25.247 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.247 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.247 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.815 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.815 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:25.815 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.815 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.815 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.074 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.074 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:26.074 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.074 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.074 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.333 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.333 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:26.333 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.333 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.333 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.592 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.592 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:26.592 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.592 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.592 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.159 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.159 17:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:27.159 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.159 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.159 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.418 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.418 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:27.418 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.418 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.418 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.717 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:27.717 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.717 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.717 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.976 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.976 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:27.976 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.976 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.976 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.235 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.235 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:28.235 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.235 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.235 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.494 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.494 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:28.494 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.494 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.494 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.060 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.060 17:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:29.060 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.060 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.060 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.319 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.319 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:29.319 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.319 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.319 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.578 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.578 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:29.578 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.578 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.578 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.837 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.837 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:29.837 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.837 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.837 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.095 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.095 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:30.095 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.095 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.095 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.662 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.662 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:30.662 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.662 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.662 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.920 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.920 17:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:30.920 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.920 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.920 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.177 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.177 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:31.177 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.177 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.177 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.177 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75757 00:10:31.436 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75757) - No such process 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75757 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:31.436 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:31.436 rmmod nvme_tcp 00:10:31.436 rmmod nvme_fabrics 00:10:31.695 rmmod nvme_keyring 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 75706 ']' 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 75706 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 75706 ']' 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 
-- # kill -0 75706 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75706 00:10:31.695 killing process with pid 75706 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75706' 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 75706 00:10:31.695 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 75706 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:31.955 00:10:31.955 real 0m12.441s 00:10:31.955 user 0m40.072s 00:10:31.955 sys 0m4.338s 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.955 ************************************ 00:10:31.955 END TEST nvmf_connect_stress 00:10:31.955 ************************************ 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:31.955 ************************************ 00:10:31.955 START TEST nvmf_fused_ordering 00:10:31.955 ************************************ 00:10:31.955 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:32.215 * Looking for test storage... 
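
For orientation before the fused_ordering setup continues below: the long run of "kill -0 75757" / "rpc_cmd" pairs that closed out nvmf_connect_stress above is connect_stress.sh polling its background stress process until it exits. A minimal sketch of that loop, using illustrative variable names, since the script body and the RPC payload are not captured in this trace:

    # Poll the stress process started earlier (PID 75757 in this run).
    while kill -0 "$stress_pid" 2>/dev/null; do    # connect_stress.sh line 34 in the trace
        rpc_cmd                                    # line 35; its input is not shown in this log
    done
    wait "$stress_pid"                             # line 38, once kill -0 reports "No such process"
    rm -f "$rpc_txt"                               # line 39: remove the rpc.txt scratch file
    trap - SIGINT SIGTERM EXIT                     # line 41
    nvmftestfini                                   # line 43: stop the target, unload nvme-tcp/nvme-fabrics

The teardown then kills the nvmf_tgt process (PID 75706) and flushes the test interfaces, which is what the rmmod and "ip -4 addr flush nvmf_init_if" lines above record.
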
00:10:32.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:32.215 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.215 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:32.215 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.215 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.215 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.215 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.215 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.215 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:32.216 Cannot find device "nvmf_tgt_br" 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:32.216 Cannot find device "nvmf_tgt_br2" 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:32.216 Cannot find device "nvmf_tgt_br" 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:32.216 Cannot find device "nvmf_tgt_br2" 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:32.216 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:32.216 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:32.216 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:32.554 
17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:32.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:10:32.554 00:10:32.554 --- 10.0.0.2 ping statistics --- 00:10:32.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.554 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:32.554 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:32.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:10:32.554 00:10:32.554 --- 10.0.0.3 ping statistics --- 00:10:32.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.554 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:32.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:32.554 00:10:32.554 --- 10.0.0.1 ping statistics --- 00:10:32.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.554 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:32.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=76080 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 76080 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 76080 ']' 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.554 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:32.814 [2024-07-24 17:57:39.512706] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
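
The nvmf_veth_init trace above reduces to the following sequence; interface, namespace, and address names are exactly those shown in the log, with the individual link-up calls grouped for brevity:

    # Target-side interfaces live in a private namespace; the initiator side stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, first target port 10.0.0.2, second target port 10.0.0.3 (all /24).
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

    # Bridge all host-side veth ends together and allow NVMe/TCP traffic to port 4420.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three single-packet pings recorded above (to 10.0.0.2 and 10.0.0.3 from the root namespace, and to 10.0.0.1 from inside nvmf_tgt_ns_spdk) verify that topology before the target application is started.
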
00:10:32.814 [2024-07-24 17:57:39.513040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.814 [2024-07-24 17:57:39.658110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.814 [2024-07-24 17:57:39.765704] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.814 [2024-07-24 17:57:39.765998] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.814 [2024-07-24 17:57:39.766017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.814 [2024-07-24 17:57:39.766027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.814 [2024-07-24 17:57:39.766036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.814 [2024-07-24 17:57:39.766078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:33.751 [2024-07-24 17:57:40.523622] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:33.751 
[2024-07-24 17:57:40.539771] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:33.751 NULL1 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.751 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:33.751 [2024-07-24 17:57:40.590257] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
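
Stripped of the xtrace noise, the fused_ordering target bring-up recorded above is: launch nvmf_tgt inside the test namespace, configure it over RPC (rpc_cmd is the harness wrapper used throughout this trace), then point the test binary at the listener. A condensed sketch of those steps as they appear in the log:

    # Target app pinned by core mask 0x2 (hence the "Reactor started on core 1" notice above);
    # the harness backgrounds it and waits on /var/tmp/spdk.sock (waitforlisten 76080).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # RPC configuration, as traced from fused_ordering.sh lines 15-20.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks (the 1GB namespace below)
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # The test tool connects as an NVMe/TCP host and prints one fused_ordering(N) line per iteration.
    /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
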
00:10:33.751 [2024-07-24 17:57:40.590313] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76130 ] 00:10:34.319 Attached to nqn.2016-06.io.spdk:cnode1 00:10:34.319 Namespace ID: 1 size: 1GB 00:10:34.319 fused_ordering(0) 00:10:34.319 fused_ordering(1) 00:10:34.319 fused_ordering(2) 00:10:34.319 fused_ordering(3) 00:10:34.319 fused_ordering(4) 00:10:34.319 fused_ordering(5) 00:10:34.319 fused_ordering(6) 00:10:34.319 fused_ordering(7) 00:10:34.319 fused_ordering(8) 00:10:34.319 fused_ordering(9) 00:10:34.319 fused_ordering(10) 00:10:34.319 fused_ordering(11) 00:10:34.319 fused_ordering(12) 00:10:34.319 fused_ordering(13) 00:10:34.319 fused_ordering(14) 00:10:34.319 fused_ordering(15) 00:10:34.319 fused_ordering(16) 00:10:34.319 fused_ordering(17) 00:10:34.319 fused_ordering(18) 00:10:34.319 fused_ordering(19) 00:10:34.319 fused_ordering(20) 00:10:34.319 fused_ordering(21) 00:10:34.319 fused_ordering(22) 00:10:34.319 fused_ordering(23) 00:10:34.319 fused_ordering(24) 00:10:34.319 fused_ordering(25) 00:10:34.319 fused_ordering(26) 00:10:34.319 fused_ordering(27) 00:10:34.319 fused_ordering(28) 00:10:34.319 fused_ordering(29) 00:10:34.319 fused_ordering(30) 00:10:34.319 fused_ordering(31) 00:10:34.319 fused_ordering(32) 00:10:34.319 fused_ordering(33) 00:10:34.319 fused_ordering(34) 00:10:34.319 fused_ordering(35) 00:10:34.319 fused_ordering(36) 00:10:34.319 fused_ordering(37) 00:10:34.319 fused_ordering(38) 00:10:34.319 fused_ordering(39) 00:10:34.319 fused_ordering(40) 00:10:34.319 fused_ordering(41) 00:10:34.319 fused_ordering(42) 00:10:34.319 fused_ordering(43) 00:10:34.319 fused_ordering(44) 00:10:34.319 fused_ordering(45) 00:10:34.319 fused_ordering(46) 00:10:34.319 fused_ordering(47) 00:10:34.319 fused_ordering(48) 00:10:34.319 fused_ordering(49) 00:10:34.319 fused_ordering(50) 00:10:34.319 fused_ordering(51) 00:10:34.319 fused_ordering(52) 00:10:34.319 fused_ordering(53) 00:10:34.319 fused_ordering(54) 00:10:34.319 fused_ordering(55) 00:10:34.319 fused_ordering(56) 00:10:34.319 fused_ordering(57) 00:10:34.319 fused_ordering(58) 00:10:34.319 fused_ordering(59) 00:10:34.319 fused_ordering(60) 00:10:34.319 fused_ordering(61) 00:10:34.319 fused_ordering(62) 00:10:34.319 fused_ordering(63) 00:10:34.319 fused_ordering(64) 00:10:34.319 fused_ordering(65) 00:10:34.319 fused_ordering(66) 00:10:34.320 fused_ordering(67) 00:10:34.320 fused_ordering(68) 00:10:34.320 fused_ordering(69) 00:10:34.320 fused_ordering(70) 00:10:34.320 fused_ordering(71) 00:10:34.320 fused_ordering(72) 00:10:34.320 fused_ordering(73) 00:10:34.320 fused_ordering(74) 00:10:34.320 fused_ordering(75) 00:10:34.320 fused_ordering(76) 00:10:34.320 fused_ordering(77) 00:10:34.320 fused_ordering(78) 00:10:34.320 fused_ordering(79) 00:10:34.320 fused_ordering(80) 00:10:34.320 fused_ordering(81) 00:10:34.320 fused_ordering(82) 00:10:34.320 fused_ordering(83) 00:10:34.320 fused_ordering(84) 00:10:34.320 fused_ordering(85) 00:10:34.320 fused_ordering(86) 00:10:34.320 fused_ordering(87) 00:10:34.320 fused_ordering(88) 00:10:34.320 fused_ordering(89) 00:10:34.320 fused_ordering(90) 00:10:34.320 fused_ordering(91) 00:10:34.320 fused_ordering(92) 00:10:34.320 fused_ordering(93) 00:10:34.320 fused_ordering(94) 00:10:34.320 fused_ordering(95) 00:10:34.320 fused_ordering(96) 00:10:34.320 fused_ordering(97) 00:10:34.320 
fused_ordering(98) 00:10:34.320 [ fused_ordering(99) through fused_ordering(849) elided; one line per iteration, identical in form, with timestamps advancing from 00:10:34.320 to 00:10:36.342 ] 00:10:36.342 fused_ordering(850)
00:10:36.342 fused_ordering(851) 00:10:36.342 fused_ordering(852) 00:10:36.342 fused_ordering(853) 00:10:36.342 fused_ordering(854) 00:10:36.342 fused_ordering(855) 00:10:36.342 fused_ordering(856) 00:10:36.342 fused_ordering(857) 00:10:36.342 fused_ordering(858) 00:10:36.342 fused_ordering(859) 00:10:36.342 fused_ordering(860) 00:10:36.342 fused_ordering(861) 00:10:36.342 fused_ordering(862) 00:10:36.342 fused_ordering(863) 00:10:36.342 fused_ordering(864) 00:10:36.342 fused_ordering(865) 00:10:36.342 fused_ordering(866) 00:10:36.342 fused_ordering(867) 00:10:36.342 fused_ordering(868) 00:10:36.342 fused_ordering(869) 00:10:36.342 fused_ordering(870) 00:10:36.342 fused_ordering(871) 00:10:36.342 fused_ordering(872) 00:10:36.342 fused_ordering(873) 00:10:36.342 fused_ordering(874) 00:10:36.342 fused_ordering(875) 00:10:36.342 fused_ordering(876) 00:10:36.342 fused_ordering(877) 00:10:36.342 fused_ordering(878) 00:10:36.342 fused_ordering(879) 00:10:36.342 fused_ordering(880) 00:10:36.342 fused_ordering(881) 00:10:36.342 fused_ordering(882) 00:10:36.342 fused_ordering(883) 00:10:36.342 fused_ordering(884) 00:10:36.342 fused_ordering(885) 00:10:36.342 fused_ordering(886) 00:10:36.342 fused_ordering(887) 00:10:36.342 fused_ordering(888) 00:10:36.342 fused_ordering(889) 00:10:36.342 fused_ordering(890) 00:10:36.342 fused_ordering(891) 00:10:36.342 fused_ordering(892) 00:10:36.342 fused_ordering(893) 00:10:36.342 fused_ordering(894) 00:10:36.342 fused_ordering(895) 00:10:36.342 fused_ordering(896) 00:10:36.342 fused_ordering(897) 00:10:36.342 fused_ordering(898) 00:10:36.342 fused_ordering(899) 00:10:36.342 fused_ordering(900) 00:10:36.342 fused_ordering(901) 00:10:36.342 fused_ordering(902) 00:10:36.342 fused_ordering(903) 00:10:36.342 fused_ordering(904) 00:10:36.342 fused_ordering(905) 00:10:36.342 fused_ordering(906) 00:10:36.342 fused_ordering(907) 00:10:36.342 fused_ordering(908) 00:10:36.342 fused_ordering(909) 00:10:36.342 fused_ordering(910) 00:10:36.342 fused_ordering(911) 00:10:36.342 fused_ordering(912) 00:10:36.342 fused_ordering(913) 00:10:36.342 fused_ordering(914) 00:10:36.342 fused_ordering(915) 00:10:36.342 fused_ordering(916) 00:10:36.342 fused_ordering(917) 00:10:36.342 fused_ordering(918) 00:10:36.342 fused_ordering(919) 00:10:36.342 fused_ordering(920) 00:10:36.342 fused_ordering(921) 00:10:36.342 fused_ordering(922) 00:10:36.342 fused_ordering(923) 00:10:36.342 fused_ordering(924) 00:10:36.342 fused_ordering(925) 00:10:36.342 fused_ordering(926) 00:10:36.342 fused_ordering(927) 00:10:36.342 fused_ordering(928) 00:10:36.342 fused_ordering(929) 00:10:36.342 fused_ordering(930) 00:10:36.342 fused_ordering(931) 00:10:36.342 fused_ordering(932) 00:10:36.342 fused_ordering(933) 00:10:36.342 fused_ordering(934) 00:10:36.342 fused_ordering(935) 00:10:36.342 fused_ordering(936) 00:10:36.342 fused_ordering(937) 00:10:36.342 fused_ordering(938) 00:10:36.342 fused_ordering(939) 00:10:36.342 fused_ordering(940) 00:10:36.342 fused_ordering(941) 00:10:36.342 fused_ordering(942) 00:10:36.342 fused_ordering(943) 00:10:36.342 fused_ordering(944) 00:10:36.342 fused_ordering(945) 00:10:36.342 fused_ordering(946) 00:10:36.342 fused_ordering(947) 00:10:36.342 fused_ordering(948) 00:10:36.342 fused_ordering(949) 00:10:36.342 fused_ordering(950) 00:10:36.342 fused_ordering(951) 00:10:36.342 fused_ordering(952) 00:10:36.342 fused_ordering(953) 00:10:36.342 fused_ordering(954) 00:10:36.342 fused_ordering(955) 00:10:36.342 fused_ordering(956) 00:10:36.342 fused_ordering(957) 00:10:36.342 
fused_ordering(958) 00:10:36.342 fused_ordering(959) 00:10:36.342 fused_ordering(960) 00:10:36.342 fused_ordering(961) 00:10:36.342 fused_ordering(962) 00:10:36.342 fused_ordering(963) 00:10:36.342 fused_ordering(964) 00:10:36.342 fused_ordering(965) 00:10:36.342 fused_ordering(966) 00:10:36.342 fused_ordering(967) 00:10:36.342 fused_ordering(968) 00:10:36.342 fused_ordering(969) 00:10:36.342 fused_ordering(970) 00:10:36.342 fused_ordering(971) 00:10:36.342 fused_ordering(972) 00:10:36.342 fused_ordering(973) 00:10:36.342 fused_ordering(974) 00:10:36.342 fused_ordering(975) 00:10:36.343 fused_ordering(976) 00:10:36.343 fused_ordering(977) 00:10:36.343 fused_ordering(978) 00:10:36.343 fused_ordering(979) 00:10:36.343 fused_ordering(980) 00:10:36.343 fused_ordering(981) 00:10:36.343 fused_ordering(982) 00:10:36.343 fused_ordering(983) 00:10:36.343 fused_ordering(984) 00:10:36.343 fused_ordering(985) 00:10:36.343 fused_ordering(986) 00:10:36.343 fused_ordering(987) 00:10:36.343 fused_ordering(988) 00:10:36.343 fused_ordering(989) 00:10:36.343 fused_ordering(990) 00:10:36.343 fused_ordering(991) 00:10:36.343 fused_ordering(992) 00:10:36.343 fused_ordering(993) 00:10:36.343 fused_ordering(994) 00:10:36.343 fused_ordering(995) 00:10:36.343 fused_ordering(996) 00:10:36.343 fused_ordering(997) 00:10:36.343 fused_ordering(998) 00:10:36.343 fused_ordering(999) 00:10:36.343 fused_ordering(1000) 00:10:36.343 fused_ordering(1001) 00:10:36.343 fused_ordering(1002) 00:10:36.343 fused_ordering(1003) 00:10:36.343 fused_ordering(1004) 00:10:36.343 fused_ordering(1005) 00:10:36.343 fused_ordering(1006) 00:10:36.343 fused_ordering(1007) 00:10:36.343 fused_ordering(1008) 00:10:36.343 fused_ordering(1009) 00:10:36.343 fused_ordering(1010) 00:10:36.343 fused_ordering(1011) 00:10:36.343 fused_ordering(1012) 00:10:36.343 fused_ordering(1013) 00:10:36.343 fused_ordering(1014) 00:10:36.343 fused_ordering(1015) 00:10:36.343 fused_ordering(1016) 00:10:36.343 fused_ordering(1017) 00:10:36.343 fused_ordering(1018) 00:10:36.343 fused_ordering(1019) 00:10:36.343 fused_ordering(1020) 00:10:36.343 fused_ordering(1021) 00:10:36.343 fused_ordering(1022) 00:10:36.343 fused_ordering(1023) 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:36.343 rmmod nvme_tcp 00:10:36.343 rmmod nvme_fabrics 00:10:36.343 rmmod nvme_keyring 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:36.343 17:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 76080 ']' 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 76080 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 76080 ']' 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 76080 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76080 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:36.343 killing process with pid 76080 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76080' 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 76080 00:10:36.343 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 76080 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:36.610 00:10:36.610 real 0m4.562s 00:10:36.610 user 0m5.400s 00:10:36.610 sys 0m1.657s 00:10:36.610 ************************************ 00:10:36.610 END TEST nvmf_fused_ordering 00:10:36.610 ************************************ 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.610 ************************************ 00:10:36.610 START TEST nvmf_ns_masking 00:10:36.610 ************************************ 00:10:36.610 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:36.610 * Looking for test storage... 00:10:36.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.867 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.868 17:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3e9ba407-7c9f-4171-b30d-a1b8a672460b 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e3ba6db1-9b81-4551-bc9b-f75a8040a8f0 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=880b68a1-f16a-4460-9b92-3aab36bbd619 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 
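[Editor's note] The nvmf_veth_init block that starts here is hard to follow in xtrace form. The lines below are a hand-written sketch of the topology it builds, reusing the interface, bridge and namespace names recorded in this log; iptables rules, the second target link (nvmf_tgt_if2 / 10.0.0.3) and all error handling are omitted, so treat it as an illustration of the idea rather than the helper itself.

    # Sketch (not part of the recorded run): an initiator-side veth bridged to a
    # veth whose peer lives inside the target network namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.2    # host-side reachability check, mirrored by the pings below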
00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:36.868 Cannot find device "nvmf_tgt_br" 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:36.868 Cannot find device "nvmf_tgt_br2" 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:36.868 Cannot find device "nvmf_tgt_br" 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:36.868 Cannot find device "nvmf_tgt_br2" 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.868 
17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:36.868 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:37.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:37.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:10:37.126 00:10:37.126 --- 10.0.0.2 ping statistics --- 00:10:37.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.126 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:37.126 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:37.126 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:10:37.126 00:10:37.126 --- 10.0.0.3 ping statistics --- 00:10:37.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.126 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:37.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:37.126 00:10:37.126 --- 10.0.0.1 ping statistics --- 00:10:37.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.126 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.126 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.126 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:37.126 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.126 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.126 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:37.126 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=76350 00:10:37.126 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:37.126 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 76350 00:10:37.126 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 76350 ']' 00:10:37.126 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.126 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.126 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:10:37.127 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.127 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.127 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:37.127 [2024-07-24 17:57:44.084854] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:10:37.127 [2024-07-24 17:57:44.084990] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.384 [2024-07-24 17:57:44.270361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.642 [2024-07-24 17:57:44.462469] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.642 [2024-07-24 17:57:44.462546] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.642 [2024-07-24 17:57:44.462563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.642 [2024-07-24 17:57:44.462577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.642 [2024-07-24 17:57:44.462589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.642 [2024-07-24 17:57:44.462644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.208 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.208 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:10:38.208 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:38.208 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.208 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:38.208 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.208 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:38.466 [2024-07-24 17:57:45.375255] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.466 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:38.466 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:38.466 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:38.724 Malloc1 00:10:38.724 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:39.292 Malloc2 00:10:39.292 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.550 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:39.855 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.136 [2024-07-24 17:57:46.875308] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.136 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:40.136 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 880b68a1-f16a-4460-9b92-3aab36bbd619 -a 10.0.0.2 -s 4420 -i 4 00:10:40.136 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:40.136 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:40.136 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.136 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:40.136 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:42.034 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:42.034 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:42.034 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:42.325 [ 0]:0x1 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:42.325 17:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=60848f640c9a49de8c5c2fb55c4c66a7 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 60848f640c9a49de8c5c2fb55c4c66a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:42.325 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:42.585 [ 0]:0x1 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=60848f640c9a49de8c5c2fb55c4c66a7 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 60848f640c9a49de8c5c2fb55c4c66a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:42.585 [ 1]:0x2 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9228996ef43a4815bfa91d7e1967f4ea 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9228996ef43a4815bfa91d7e1967f4ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:42.585 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:42.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.843 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.102 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:43.361 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:43.361 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 880b68a1-f16a-4460-9b92-3aab36bbd619 -a 10.0.0.2 -s 4420 -i 4 00:10:43.361 17:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:43.361 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:43.361 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.361 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:10:43.361 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:10:43.361 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:45.264 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:45.264 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:45.264 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:45.523 [ 0]:0x2 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9228996ef43a4815bfa91d7e1967f4ea 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9228996ef43a4815bfa91d7e1967f4ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:45.523 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:45.783 [ 0]:0x1 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=60848f640c9a49de8c5c2fb55c4c66a7 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 60848f640c9a49de8c5c2fb55c4c66a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:45.783 [ 1]:0x2 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:45.783 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=9228996ef43a4815bfa91d7e1967f4ea 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9228996ef43a4815bfa91d7e1967f4ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:46.042 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:46.042 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:46.042 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:46.301 [ 0]:0x2 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9228996ef43a4815bfa91d7e1967f4ea 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 9228996ef43a4815bfa91d7e1967f4ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.301 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:46.558 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:46.559 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 880b68a1-f16a-4460-9b92-3aab36bbd619 -a 10.0.0.2 -s 4420 -i 4 00:10:46.816 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:46.816 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:46.816 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.816 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:46.816 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:46.816 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:48.775 [ 0]:0x1 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=60848f640c9a49de8c5c2fb55c4c66a7 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 60848f640c9a49de8c5c2fb55c4c66a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:48.775 [ 1]:0x2 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9228996ef43a4815bfa91d7e1967f4ea 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9228996ef43a4815bfa91d7e1967f4ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:48.775 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 
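The repeated ns_is_visible checks traced above reduce to two nvme-cli calls plus a jq comparison against an all-zero NGUID. The bash sketch below is a condensed reconstruction from the traced commands (device path, NSID handling and the zero sentinel are taken from the log; the verbatim helper in ns_masking.sh may differ), followed by the expected-failure pattern the surrounding es bookkeeping implements.

# Sketch reconstructed from the traced commands, not the verbatim ns_masking.sh source.
ns_is_visible() {
    # List active namespaces on the controller and look for the requested NSID.
    nvme list-ns /dev/nvme0 | grep "$1"

    # Pull the NGUID for that NSID; a namespace masked from this host reports all zeroes.
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# Expected-failure usage, as in the trace: once nvmf_ns_remove_host has hidden the
# namespace from this host NQN, the visibility check must fail (es=1).
if ! ns_is_visible 0x1; then
    echo "namespace 0x1 is masked from this host, as expected"
fi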
00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:49.033 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:49.033 [ 0]:0x2 00:10:49.033 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:49.033 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:49.293 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9228996ef43a4815bfa91d7e1967f4ea 00:10:49.293 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9228996ef43a4815bfa91d7e1967f4ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:49.294 [2024-07-24 17:57:56.240783] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:49.294 2024/07/24 17:57:56 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:10:49.294 request: 00:10:49.294 { 00:10:49.294 "method": "nvmf_ns_remove_host", 00:10:49.294 "params": { 00:10:49.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.294 "nsid": 2, 00:10:49.294 "host": "nqn.2016-06.io.spdk:host1" 00:10:49.294 } 00:10:49.294 } 00:10:49.294 Got JSON-RPC error response 00:10:49.294 GoRPCClient: error on JSON-RPC call 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.294 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:10:49.559 [ 0]:0x2 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9228996ef43a4815bfa91d7e1967f4ea 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9228996ef43a4815bfa91d7e1967f4ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76727 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76727 /var/tmp/host.sock 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 76727 ']' 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:49.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.559 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:49.559 [2024-07-24 17:57:56.489515] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
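The tail of the trace above starts a second spdk_tgt that plays the NVMe-oF host role, bound to its own RPC socket so it cannot collide with the target app on /var/tmp/spdk.sock. A minimal sketch of that pattern, with the paths, core mask and trap copied from the log (waitforlisten is the test helper from autotest_common.sh):

# Host-side SPDK app on a private RPC socket, core mask 0x2 (second core).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
hostpid=$!
trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT

# Block until the app answers RPCs on that socket.
waitforlisten "$hostpid" /var/tmp/host.sock

# From here on, host-side RPCs go to the private socket, e.g.:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs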
00:10:49.559 [2024-07-24 17:57:56.489640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76727 ] 00:10:49.817 [2024-07-24 17:57:56.635996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.817 [2024-07-24 17:57:56.753709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.752 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.752 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:10:50.752 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.752 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:51.011 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3e9ba407-7c9f-4171-b30d-a1b8a672460b 00:10:51.011 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:51.011 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3E9BA4077C9F4171B30DA1B8A672460B -i 00:10:51.270 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e3ba6db1-9b81-4551-bc9b-f75a8040a8f0 00:10:51.270 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:51.270 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E3BA6DB19B814551BC9BF75A8040A8F0 -i 00:10:51.836 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:51.836 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:52.094 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:52.094 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:52.353 nvme0n1 00:10:52.353 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:52.353 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:52.612 nvme1n2 00:10:52.612 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:52.612 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:52.612 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:10:52.612 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:52.612 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:52.870 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:52.870 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:52.870 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:52.870 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:53.434 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3e9ba407-7c9f-4171-b30d-a1b8a672460b == \3\e\9\b\a\4\0\7\-\7\c\9\f\-\4\1\7\1\-\b\3\0\d\-\a\1\b\8\a\6\7\2\4\6\0\b ]] 00:10:53.434 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:53.434 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:53.434 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ e3ba6db1-9b81-4551-bc9b-f75a8040a8f0 == \e\3\b\a\6\d\b\1\-\9\b\8\1\-\4\5\5\1\-\b\c\9\b\-\f\7\5\a\8\0\4\0\a\8\f\0 ]] 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 76727 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 76727 ']' 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 76727 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76727 00:10:53.691 killing process with pid 76727 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76727' 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 76727 00:10:53.691 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # 
wait 76727 00:10:53.949 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.207 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:54.207 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:54.207 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:54.207 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:54.207 rmmod nvme_tcp 00:10:54.207 rmmod nvme_fabrics 00:10:54.207 rmmod nvme_keyring 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 76350 ']' 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 76350 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 76350 ']' 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 76350 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76350 00:10:54.207 killing process with pid 76350 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76350' 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 76350 00:10:54.207 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 76350 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
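Earlier in this part of the trace the namespaces are re-created with explicit NGUIDs derived from UUIDs, masked per host, and then verified from the host-side app via bdev_get_bdevs. A sketch of that flow assembled from the traced RPCs; only the tr -d step of uuid2nguid is visible in the log, so the upper-casing is an assumption, the $rpc shorthand is for readability, and the RPC flags are copied verbatim from the trace.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shorthand for the traced full path

# NGUID = UUID with dashes stripped and hex upper-cased (upper-casing assumed).
uuid2nguid() { tr -d - <<< "${1^^}"; }

nguid1=$(uuid2nguid 3e9ba407-7c9f-4171-b30d-a1b8a672460b)
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid1" -i
"$rpc" nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

# Attach from the host-side app and confirm the bdev UUID round-trips.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
    -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
uuid=$("$rpc" -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid')
[[ $uuid == "3e9ba407-7c9f-4171-b30d-a1b8a672460b" ]]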
00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:54.465 00:10:54.465 real 0m17.869s 00:10:54.465 user 0m27.441s 00:10:54.465 sys 0m3.496s 00:10:54.465 ************************************ 00:10:54.465 END TEST nvmf_ns_masking 00:10:54.465 ************************************ 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:54.465 ************************************ 00:10:54.465 START TEST nvmf_auth_target 00:10:54.465 ************************************ 00:10:54.465 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:54.724 * Looking for test storage... 
00:10:54.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.724 17:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.724 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 
-- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
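The nvmf/common.sh setup traced above generates a fresh host identity per run: nvme gen-hostnqn emits a uuid-based NQN, and the host ID is the UUID portion of that NQN. A small sketch of that step; only the resulting values appear in the trace, so the suffix extraction shown here is an assumed derivation, not the verbatim helper.

# Fresh per-run host identity, as set up in nvmf/common.sh.
NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-...
NVME_HOSTID=${NVME_HOSTNQN##*:}     # UUID suffix of the NQN (assumed derivation)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")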
00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:54.725 Cannot find device "nvmf_tgt_br" 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:54.725 Cannot find device "nvmf_tgt_br2" 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:54.725 Cannot find device "nvmf_tgt_br" 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:54.725 Cannot find device "nvmf_tgt_br2" 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:54.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:54.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:54.725 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:54.984 17:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:54.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:10:54.984 00:10:54.984 --- 10.0.0.2 ping statistics --- 00:10:54.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.984 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:54.984 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:54.984 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:10:54.984 00:10:54.984 --- 10.0.0.3 ping statistics --- 00:10:54.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.984 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:54.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:54.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:54.984 00:10:54.984 --- 10.0.0.1 ping statistics --- 00:10:54.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.984 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77092 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77092 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77092 ']' 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
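The nvmf_veth_init trace above builds the whole test network from scratch: a network namespace for the target, veth pairs, a bridge, 10.0.0.0/24 addressing, an iptables accept rule for port 4420, and ping checks in both directions. Condensed into a sketch with interface names and addresses copied from the log; the second target interface (nvmf_tgt_if2, 10.0.0.3) follows the same pattern and is omitted, as is the teardown of stale links.

ip netns add nvmf_tgt_ns_spdk

# veth pairs: initiator side stays in the root namespace, target side moves into the netns.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 = initiator, 10.0.0.2 = target listener.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the root-namespace ends together and let NVMe/TCP traffic through.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator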
00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.984 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77136 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:56.409 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a3d9090a5316040c3419d135a17d9561fe44263a91c80c08 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FwG 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a3d9090a5316040c3419d135a17d9561fe44263a91c80c08 0 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a3d9090a5316040c3419d135a17d9561fe44263a91c80c08 0 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a3d9090a5316040c3419d135a17d9561fe44263a91c80c08 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:56.409 17:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FwG 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FwG 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.FwG 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=44b3d50584f45583d55809aabbd30259293acfd9be44a96e0d1eb7de4d08d470 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.McJ 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 44b3d50584f45583d55809aabbd30259293acfd9be44a96e0d1eb7de4d08d470 3 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 44b3d50584f45583d55809aabbd30259293acfd9be44a96e0d1eb7de4d08d470 3 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=44b3d50584f45583d55809aabbd30259293acfd9be44a96e0d1eb7de4d08d470 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.McJ 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.McJ 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.McJ 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:56.409 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:56.409 17:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4990b86c8f939ff5dd50727dc1e560d6 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Gd6 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4990b86c8f939ff5dd50727dc1e560d6 1 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4990b86c8f939ff5dd50727dc1e560d6 1 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4990b86c8f939ff5dd50727dc1e560d6 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Gd6 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Gd6 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Gd6 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c80d57a97e8b4d4c4cd49efd713d602dad705746a5a976e8 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Lhr 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c80d57a97e8b4d4c4cd49efd713d602dad705746a5a976e8 2 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c80d57a97e8b4d4c4cd49efd713d602dad705746a5a976e8 2 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c80d57a97e8b4d4c4cd49efd713d602dad705746a5a976e8 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Lhr 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Lhr 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Lhr 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=494a6ed63a10f78688e088541187a4bc6e47311387fae6c7 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IjN 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 494a6ed63a10f78688e088541187a4bc6e47311387fae6c7 2 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 494a6ed63a10f78688e088541187a4bc6e47311387fae6c7 2 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=494a6ed63a10f78688e088541187a4bc6e47311387fae6c7 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IjN 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IjN 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.IjN 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:56.410 17:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=69dac922b76b93e34255acd5037b1d0b 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LRv 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 69dac922b76b93e34255acd5037b1d0b 1 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 69dac922b76b93e34255acd5037b1d0b 1 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=69dac922b76b93e34255acd5037b1d0b 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LRv 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LRv 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.LRv 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=979fcd3c6bdbdd2afe463a227d8e8abfd50c3aa9ba50f55afc770556085be7f8 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Xwj 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
979fcd3c6bdbdd2afe463a227d8e8abfd50c3aa9ba50f55afc770556085be7f8 3 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 979fcd3c6bdbdd2afe463a227d8e8abfd50c3aa9ba50f55afc770556085be7f8 3 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=979fcd3c6bdbdd2afe463a227d8e8abfd50c3aa9ba50f55afc770556085be7f8 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:56.410 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:56.668 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Xwj 00:10:56.668 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Xwj 00:10:56.668 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Xwj 00:10:56.668 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:56.668 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77092 00:10:56.668 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77092 ']' 00:10:56.668 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.668 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:56.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.668 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.668 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:56.668 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.928 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.928 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:56.928 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77136 /var/tmp/host.sock 00:10:56.928 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77136 ']' 00:10:56.928 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:10:56.928 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:56.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:56.928 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
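The gen_dhchap_key rounds traced above reduce to a short recipe: pull N random bytes as hex with xxd, drop them into a mktemp'd /tmp/spdk.key-<digest>.XXX file, wrap them in a DHHC-1 secret string, and chmod the file to 0600. The heredoc behind the traced "python -" is not shown in the log, so the wrapper below is a hedged reconstruction that assumes the usual DH-HMAC-CHAP secret representation, base64(secret || CRC-32(secret)), with the two-digit field selecting the hash (00=null, 01=sha256, 02=sha384, 03=sha512):

  # sketch of one sha512-sized round (64 hex chars), mirroring the xxd/mktemp/chmod calls above
  key=$(xxd -p -c0 -l 32 /dev/urandom)
  file=$(mktemp -t spdk.key-sha512.XXX)       # e.g. /tmp/spdk.key-sha512.Xwj in this run
  python3 - "$key" > "$file" <<'PY'
  # assumption: the hidden heredoc emits DHHC-1:<hash>:base64(secret || CRC-32(secret)):
  import sys, base64, struct, zlib
  k = sys.argv[1].encode()
  print("DHHC-1:03:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode() + ":")
  PY
  chmod 0600 "$file"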
00:10:56.928 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:56.928 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FwG 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.FwG 00:10:57.187 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.FwG 00:10:57.444 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.McJ ]] 00:10:57.444 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.McJ 00:10:57.444 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.444 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.444 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.444 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.McJ 00:10:57.444 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.McJ 00:10:57.703 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:57.703 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Gd6 00:10:57.703 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.703 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.703 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.703 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Gd6 00:10:57.704 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Gd6 00:10:57.963 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Lhr ]] 00:10:57.963 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lhr 00:10:57.963 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.963 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.963 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.963 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lhr 00:10:57.963 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lhr 00:10:58.221 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:58.221 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.IjN 00:10:58.221 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.221 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.221 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.221 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.IjN 00:10:58.221 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.IjN 00:10:58.480 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.LRv ]] 00:10:58.480 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LRv 00:10:58.480 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.480 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.480 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.480 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LRv 00:10:58.480 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LRv 00:10:58.739 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:58.739 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Xwj 00:10:58.739 17:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.739 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.739 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.739 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Xwj 00:10:58.739 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Xwj 00:10:58.997 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:58.997 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:58.997 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:58.997 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.997 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:58.997 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:59.255 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:59.255 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:59.255 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:59.255 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:59.255 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:59.255 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.255 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.255 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.256 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.256 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.256 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.256 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:10:59.823 00:10:59.823 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:59.823 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.823 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.081 { 00:11:00.081 "auth": { 00:11:00.081 "dhgroup": "null", 00:11:00.081 "digest": "sha256", 00:11:00.081 "state": "completed" 00:11:00.081 }, 00:11:00.081 "cntlid": 1, 00:11:00.081 "listen_address": { 00:11:00.081 "adrfam": "IPv4", 00:11:00.081 "traddr": "10.0.0.2", 00:11:00.081 "trsvcid": "4420", 00:11:00.081 "trtype": "TCP" 00:11:00.081 }, 00:11:00.081 "peer_address": { 00:11:00.081 "adrfam": "IPv4", 00:11:00.081 "traddr": "10.0.0.1", 00:11:00.081 "trsvcid": "43900", 00:11:00.081 "trtype": "TCP" 00:11:00.081 }, 00:11:00.081 "qid": 0, 00:11:00.081 "state": "enabled", 00:11:00.081 "thread": "nvmf_tgt_poll_group_000" 00:11:00.081 } 00:11:00.081 ]' 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.081 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.338 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:11:04.564 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.564 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:11:04.564 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:04.564 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.564 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.564 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.564 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.564 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:04.564 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.822 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.390 00:11:05.390 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.390 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.390 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
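The flow traced so far has two halves: the earlier keyring_file_add_key calls register each generated file under a name with both the target RPC socket (/var/tmp/spdk.sock) and the host application listening on /var/tmp/host.sock, and each connect_authenticate round then authorizes the host NQN on the subsystem with nvmf_subsystem_add_host and attaches a controller from the host with bdev_nvme_attach_controller, referencing the keys by those names. A condensed sketch of the key1/ckey1 round just traced, using the paths and NQNs from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host_nqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee
  $rpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Gd6                        # target keyring
  $rpc -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Gd6  # host keyring
  # ckey1 is registered the same way from /tmp/spdk.key-sha384.Lhr
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host_nqn" \
       --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
       -a 10.0.0.2 -s 4420 -q "$host_nqn" -n nqn.2024-03.io.spdk:cnode0 \
       --dhchap-key key1 --dhchap-ctrlr-key ckey1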
00:11:05.390 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.390 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.390 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.390 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.649 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.649 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.649 { 00:11:05.649 "auth": { 00:11:05.649 "dhgroup": "null", 00:11:05.649 "digest": "sha256", 00:11:05.649 "state": "completed" 00:11:05.649 }, 00:11:05.649 "cntlid": 3, 00:11:05.649 "listen_address": { 00:11:05.649 "adrfam": "IPv4", 00:11:05.649 "traddr": "10.0.0.2", 00:11:05.649 "trsvcid": "4420", 00:11:05.649 "trtype": "TCP" 00:11:05.649 }, 00:11:05.650 "peer_address": { 00:11:05.650 "adrfam": "IPv4", 00:11:05.650 "traddr": "10.0.0.1", 00:11:05.650 "trsvcid": "43926", 00:11:05.650 "trtype": "TCP" 00:11:05.650 }, 00:11:05.650 "qid": 0, 00:11:05.650 "state": "enabled", 00:11:05.650 "thread": "nvmf_tgt_poll_group_000" 00:11:05.650 } 00:11:05.650 ]' 00:11:05.650 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.650 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:05.650 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.650 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:05.650 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.650 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.650 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.650 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.909 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:11:06.477 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.477 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:06.477 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.477 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
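Each attach is then verified from both sides: bdev_nvme_get_controllers on the host socket should report nvme0, and nvmf_subsystem_get_qpairs on the target should show a qpair whose auth block matches the digest and dhgroup under test with state "completed". A hedged condensation of those checks, using the same jq filters that appear in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  echo "$qpairs" | jq -r '.[0].auth.digest'    # sha256
  echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # null here, ffdhe2048 later in the run
  echo "$qpairs" | jq -r '.[0].auth.state'     # completed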
00:11:06.477 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.477 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.477 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:06.477 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.735 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:07.301 00:11:07.301 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.301 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.301 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.301 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.301 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.302 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.302 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.302 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.302 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.302 { 00:11:07.302 "auth": { 00:11:07.302 "dhgroup": "null", 00:11:07.302 "digest": "sha256", 00:11:07.302 "state": "completed" 00:11:07.302 }, 00:11:07.302 "cntlid": 5, 00:11:07.302 "listen_address": { 00:11:07.302 "adrfam": "IPv4", 00:11:07.302 "traddr": "10.0.0.2", 00:11:07.302 "trsvcid": "4420", 00:11:07.302 "trtype": "TCP" 00:11:07.302 }, 00:11:07.302 "peer_address": { 00:11:07.302 "adrfam": "IPv4", 00:11:07.302 "traddr": "10.0.0.1", 00:11:07.302 "trsvcid": "58362", 00:11:07.302 "trtype": "TCP" 00:11:07.302 }, 00:11:07.302 "qid": 0, 00:11:07.302 "state": "enabled", 00:11:07.302 "thread": "nvmf_tgt_poll_group_000" 00:11:07.302 } 00:11:07.302 ]' 00:11:07.302 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.560 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.560 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.560 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:07.560 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.560 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.560 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.560 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.817 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:11:08.383 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.383 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:08.383 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.383 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.383 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.383 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.383 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:08.383 17:58:15 
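The same keys are also exercised through the kernel initiator: after detaching the SPDK-side controller, nvme-cli connects with the literal DHHC-1 strings (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key) rather than keyring names, then disconnects, and the host is removed from the subsystem before the next keyid. Reflowed from the key2 round traced above, with the secrets copied from this run:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
       -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee \
       --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee \
       --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: \
       --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
       nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee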
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:08.641 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:08.641 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.641 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:08.641 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:08.641 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:08.641 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.641 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:11:08.641 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.641 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.900 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.900 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:08.900 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:09.160 00:11:09.160 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.160 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.160 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.419 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.419 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.419 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.419 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.419 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.419 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.419 { 00:11:09.419 "auth": { 00:11:09.419 "dhgroup": "null", 00:11:09.419 "digest": "sha256", 00:11:09.419 "state": "completed" 00:11:09.419 }, 00:11:09.419 "cntlid": 7, 00:11:09.419 "listen_address": { 00:11:09.419 "adrfam": "IPv4", 00:11:09.419 
"traddr": "10.0.0.2", 00:11:09.419 "trsvcid": "4420", 00:11:09.419 "trtype": "TCP" 00:11:09.419 }, 00:11:09.419 "peer_address": { 00:11:09.419 "adrfam": "IPv4", 00:11:09.419 "traddr": "10.0.0.1", 00:11:09.419 "trsvcid": "58382", 00:11:09.419 "trtype": "TCP" 00:11:09.419 }, 00:11:09.419 "qid": 0, 00:11:09.419 "state": "enabled", 00:11:09.419 "thread": "nvmf_tgt_poll_group_000" 00:11:09.419 } 00:11:09.419 ]' 00:11:09.419 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.419 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.419 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.678 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:09.678 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.678 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.678 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.678 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.936 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:11:10.503 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.503 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:10.503 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.503 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.503 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.503 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:10.503 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.503 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:10.503 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:11.070 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:11.070 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.070 
17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:11.070 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:11.070 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:11.070 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.070 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.070 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.070 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.070 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.070 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.070 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.328 00:11:11.328 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.328 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.328 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.586 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.586 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.586 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.586 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.586 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.586 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.586 { 00:11:11.586 "auth": { 00:11:11.586 "dhgroup": "ffdhe2048", 00:11:11.586 "digest": "sha256", 00:11:11.586 "state": "completed" 00:11:11.586 }, 00:11:11.586 "cntlid": 9, 00:11:11.586 "listen_address": { 00:11:11.586 "adrfam": "IPv4", 00:11:11.586 "traddr": "10.0.0.2", 00:11:11.586 "trsvcid": "4420", 00:11:11.586 "trtype": "TCP" 00:11:11.586 }, 00:11:11.586 "peer_address": { 00:11:11.586 "adrfam": "IPv4", 00:11:11.586 "traddr": "10.0.0.1", 00:11:11.586 "trsvcid": "58404", 00:11:11.586 "trtype": "TCP" 00:11:11.586 }, 00:11:11.586 "qid": 0, 00:11:11.586 "state": "enabled", 00:11:11.586 "thread": "nvmf_tgt_poll_group_000" 00:11:11.586 } 
00:11:11.586 ]' 00:11:11.586 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.586 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.586 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.845 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:11.845 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.845 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.845 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.845 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.104 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:11:12.672 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.672 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:12.672 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.672 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.672 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.672 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.672 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:12.672 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.930 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.496 00:11:13.496 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.496 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.496 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.763 { 00:11:13.763 "auth": { 00:11:13.763 "dhgroup": "ffdhe2048", 00:11:13.763 "digest": "sha256", 00:11:13.763 "state": "completed" 00:11:13.763 }, 00:11:13.763 "cntlid": 11, 00:11:13.763 "listen_address": { 00:11:13.763 "adrfam": "IPv4", 00:11:13.763 "traddr": "10.0.0.2", 00:11:13.763 "trsvcid": "4420", 00:11:13.763 "trtype": "TCP" 00:11:13.763 }, 00:11:13.763 "peer_address": { 00:11:13.763 "adrfam": "IPv4", 00:11:13.763 "traddr": "10.0.0.1", 00:11:13.763 "trsvcid": "58428", 00:11:13.763 "trtype": "TCP" 00:11:13.763 }, 00:11:13.763 "qid": 0, 00:11:13.763 "state": "enabled", 00:11:13.763 "thread": "nvmf_tgt_poll_group_000" 00:11:13.763 } 00:11:13.763 ]' 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.763 17:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.763 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.328 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:11:14.894 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.894 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:14.894 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.894 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.894 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.894 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.894 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:14.894 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.152 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.410 00:11:15.410 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.410 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.410 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.976 { 00:11:15.976 "auth": { 00:11:15.976 "dhgroup": "ffdhe2048", 00:11:15.976 "digest": "sha256", 00:11:15.976 "state": "completed" 00:11:15.976 }, 00:11:15.976 "cntlid": 13, 00:11:15.976 "listen_address": { 00:11:15.976 "adrfam": "IPv4", 00:11:15.976 "traddr": "10.0.0.2", 00:11:15.976 "trsvcid": "4420", 00:11:15.976 "trtype": "TCP" 00:11:15.976 }, 00:11:15.976 "peer_address": { 00:11:15.976 "adrfam": "IPv4", 00:11:15.976 "traddr": "10.0.0.1", 00:11:15.976 "trsvcid": "58470", 00:11:15.976 "trtype": "TCP" 00:11:15.976 }, 00:11:15.976 "qid": 0, 00:11:15.976 "state": "enabled", 00:11:15.976 "thread": "nvmf_tgt_poll_group_000" 00:11:15.976 } 00:11:15.976 ]' 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.976 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.234 17:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:11:16.803 17:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.061 17:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:17.061 17:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.061 17:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.061 17:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.061 17:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.061 17:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:17.061 17:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.320 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.578 00:11:17.578 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.578 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.578 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.837 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.837 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.837 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.837 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.837 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.837 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.837 { 00:11:17.837 "auth": { 00:11:17.837 "dhgroup": "ffdhe2048", 00:11:17.837 "digest": "sha256", 00:11:17.837 "state": "completed" 00:11:17.837 }, 00:11:17.838 "cntlid": 15, 00:11:17.838 "listen_address": { 00:11:17.838 "adrfam": "IPv4", 00:11:17.838 "traddr": "10.0.0.2", 00:11:17.838 "trsvcid": "4420", 00:11:17.838 "trtype": "TCP" 00:11:17.838 }, 00:11:17.838 "peer_address": { 00:11:17.838 "adrfam": "IPv4", 00:11:17.838 "traddr": "10.0.0.1", 00:11:17.838 "trsvcid": "36926", 00:11:17.838 "trtype": "TCP" 00:11:17.838 }, 00:11:17.838 "qid": 0, 00:11:17.838 "state": "enabled", 00:11:17.838 "thread": "nvmf_tgt_poll_group_000" 00:11:17.838 } 00:11:17.838 ]' 00:11:17.838 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.838 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.838 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.096 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:18.096 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.096 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.096 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.097 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.355 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:11:18.923 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.923 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:18.923 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.923 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.923 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.923 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:18.923 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.923 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:18.923 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:19.492 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:19.492 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:19.492 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:19.492 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:19.492 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:19.492 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.492 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.492 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.492 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.493 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.493 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.493 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.493 00:11:19.781 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.781 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.781 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.039 { 00:11:20.039 "auth": { 00:11:20.039 "dhgroup": "ffdhe3072", 00:11:20.039 "digest": "sha256", 00:11:20.039 "state": "completed" 00:11:20.039 }, 00:11:20.039 "cntlid": 17, 00:11:20.039 "listen_address": { 00:11:20.039 "adrfam": "IPv4", 00:11:20.039 "traddr": "10.0.0.2", 00:11:20.039 "trsvcid": "4420", 00:11:20.039 "trtype": "TCP" 00:11:20.039 }, 00:11:20.039 "peer_address": { 00:11:20.039 "adrfam": "IPv4", 00:11:20.039 "traddr": "10.0.0.1", 00:11:20.039 "trsvcid": "36964", 00:11:20.039 "trtype": "TCP" 00:11:20.039 }, 00:11:20.039 "qid": 0, 00:11:20.039 "state": "enabled", 00:11:20.039 "thread": "nvmf_tgt_poll_group_000" 00:11:20.039 } 00:11:20.039 ]' 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.039 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.605 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:11:21.174 17:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.174 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:21.174 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.174 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.174 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.174 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:21.174 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:21.174 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.433 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.691 00:11:21.691 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.691 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.691 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.949 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.949 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.949 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.949 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.205 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.205 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.205 { 00:11:22.205 "auth": { 00:11:22.205 "dhgroup": "ffdhe3072", 00:11:22.205 "digest": "sha256", 00:11:22.205 "state": "completed" 00:11:22.205 }, 00:11:22.205 "cntlid": 19, 00:11:22.205 "listen_address": { 00:11:22.205 "adrfam": "IPv4", 00:11:22.206 "traddr": "10.0.0.2", 00:11:22.206 "trsvcid": "4420", 00:11:22.206 "trtype": "TCP" 00:11:22.206 }, 00:11:22.206 "peer_address": { 00:11:22.206 "adrfam": "IPv4", 00:11:22.206 "traddr": "10.0.0.1", 00:11:22.206 "trsvcid": "37000", 00:11:22.206 "trtype": "TCP" 00:11:22.206 }, 00:11:22.206 "qid": 0, 00:11:22.206 "state": "enabled", 00:11:22.206 "thread": "nvmf_tgt_poll_group_000" 00:11:22.206 } 00:11:22.206 ]' 00:11:22.206 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.206 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.206 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:22.206 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:22.206 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.206 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.206 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.206 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.462 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:11:23.396 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.396 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:23.396 
17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.396 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.396 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.396 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.396 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:23.396 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.655 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.913 00:11:23.913 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.913 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.913 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.171 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.171 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:11:24.171 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.171 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.171 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.172 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:24.172 { 00:11:24.172 "auth": { 00:11:24.172 "dhgroup": "ffdhe3072", 00:11:24.172 "digest": "sha256", 00:11:24.172 "state": "completed" 00:11:24.172 }, 00:11:24.172 "cntlid": 21, 00:11:24.172 "listen_address": { 00:11:24.172 "adrfam": "IPv4", 00:11:24.172 "traddr": "10.0.0.2", 00:11:24.172 "trsvcid": "4420", 00:11:24.172 "trtype": "TCP" 00:11:24.172 }, 00:11:24.172 "peer_address": { 00:11:24.172 "adrfam": "IPv4", 00:11:24.172 "traddr": "10.0.0.1", 00:11:24.172 "trsvcid": "37026", 00:11:24.172 "trtype": "TCP" 00:11:24.172 }, 00:11:24.172 "qid": 0, 00:11:24.172 "state": "enabled", 00:11:24.172 "thread": "nvmf_tgt_poll_group_000" 00:11:24.172 } 00:11:24.172 ]' 00:11:24.172 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.172 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.172 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.172 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:24.172 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.172 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.172 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.172 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.429 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:11:25.002 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.002 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:25.002 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.002 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.002 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.002 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:25.002 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:25.002 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.265 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.521 00:11:25.521 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.521 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.521 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.780 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.780 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.780 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.780 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.780 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.780 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.780 { 00:11:25.780 "auth": { 
00:11:25.780 "dhgroup": "ffdhe3072", 00:11:25.780 "digest": "sha256", 00:11:25.780 "state": "completed" 00:11:25.780 }, 00:11:25.780 "cntlid": 23, 00:11:25.780 "listen_address": { 00:11:25.780 "adrfam": "IPv4", 00:11:25.780 "traddr": "10.0.0.2", 00:11:25.780 "trsvcid": "4420", 00:11:25.780 "trtype": "TCP" 00:11:25.780 }, 00:11:25.780 "peer_address": { 00:11:25.780 "adrfam": "IPv4", 00:11:25.780 "traddr": "10.0.0.1", 00:11:25.780 "trsvcid": "37058", 00:11:25.780 "trtype": "TCP" 00:11:25.780 }, 00:11:25.780 "qid": 0, 00:11:25.780 "state": "enabled", 00:11:25.780 "thread": "nvmf_tgt_poll_group_000" 00:11:25.780 } 00:11:25.780 ]' 00:11:25.780 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.780 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.780 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.038 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:26.038 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.038 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.038 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.038 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.295 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:11:26.859 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.859 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:26.859 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.859 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.116 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.116 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:27.116 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.116 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:27.116 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:27.375 17:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:11:27.375 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.375 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:27.375 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:27.375 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:27.375 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.375 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.375 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.375 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.375 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.375 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.375 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.633 00:11:27.633 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.633 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.633 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.199 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.199 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.199 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.199 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.199 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.199 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.199 { 00:11:28.199 "auth": { 00:11:28.199 "dhgroup": "ffdhe4096", 00:11:28.199 "digest": "sha256", 00:11:28.199 "state": "completed" 00:11:28.199 }, 00:11:28.199 "cntlid": 25, 00:11:28.199 "listen_address": { 00:11:28.199 "adrfam": "IPv4", 00:11:28.199 "traddr": "10.0.0.2", 00:11:28.199 "trsvcid": "4420", 00:11:28.199 "trtype": "TCP" 00:11:28.199 }, 00:11:28.199 "peer_address": { 00:11:28.199 
"adrfam": "IPv4", 00:11:28.199 "traddr": "10.0.0.1", 00:11:28.199 "trsvcid": "43398", 00:11:28.199 "trtype": "TCP" 00:11:28.199 }, 00:11:28.199 "qid": 0, 00:11:28.199 "state": "enabled", 00:11:28.199 "thread": "nvmf_tgt_poll_group_000" 00:11:28.199 } 00:11:28.199 ]' 00:11:28.199 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.199 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.199 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.199 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:28.199 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.199 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.199 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.199 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.457 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:11:29.024 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.024 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:29.024 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.024 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.024 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.024 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.024 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:29.024 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:29.282 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:11:29.282 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.282 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:29.282 17:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:29.282 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:29.282 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.282 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.282 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.282 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.282 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.282 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.282 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.540 00:11:29.871 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.871 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.871 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.871 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.871 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.871 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.871 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.871 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.871 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.871 { 00:11:29.871 "auth": { 00:11:29.871 "dhgroup": "ffdhe4096", 00:11:29.871 "digest": "sha256", 00:11:29.871 "state": "completed" 00:11:29.871 }, 00:11:29.871 "cntlid": 27, 00:11:29.871 "listen_address": { 00:11:29.871 "adrfam": "IPv4", 00:11:29.871 "traddr": "10.0.0.2", 00:11:29.871 "trsvcid": "4420", 00:11:29.871 "trtype": "TCP" 00:11:29.871 }, 00:11:29.871 "peer_address": { 00:11:29.871 "adrfam": "IPv4", 00:11:29.871 "traddr": "10.0.0.1", 00:11:29.871 "trsvcid": "43428", 00:11:29.871 "trtype": "TCP" 00:11:29.871 }, 00:11:29.871 "qid": 0, 00:11:29.871 "state": "enabled", 00:11:29.871 "thread": "nvmf_tgt_poll_group_000" 00:11:29.871 } 00:11:29.871 ]' 00:11:30.130 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:11:30.130 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:30.130 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.130 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:30.130 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.130 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.130 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.130 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.388 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:11:30.954 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.954 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:30.954 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.954 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.213 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.213 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.213 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:31.213 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.472 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.732 00:11:31.732 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.732 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.732 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.990 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.990 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.990 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.991 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.991 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.991 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.991 { 00:11:31.991 "auth": { 00:11:31.991 "dhgroup": "ffdhe4096", 00:11:31.991 "digest": "sha256", 00:11:31.991 "state": "completed" 00:11:31.991 }, 00:11:31.991 "cntlid": 29, 00:11:31.991 "listen_address": { 00:11:31.991 "adrfam": "IPv4", 00:11:31.991 "traddr": "10.0.0.2", 00:11:31.991 "trsvcid": "4420", 00:11:31.991 "trtype": "TCP" 00:11:31.991 }, 00:11:31.991 "peer_address": { 00:11:31.991 "adrfam": "IPv4", 00:11:31.991 "traddr": "10.0.0.1", 00:11:31.991 "trsvcid": "43446", 00:11:31.991 "trtype": "TCP" 00:11:31.991 }, 00:11:31.991 "qid": 0, 00:11:31.991 "state": "enabled", 00:11:31.991 "thread": "nvmf_tgt_poll_group_000" 00:11:31.991 } 00:11:31.991 ]' 00:11:31.991 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.991 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.991 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.249 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:32.249 17:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.249 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.249 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.249 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.515 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:11:33.102 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.102 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:33.102 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.102 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.102 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.102 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.102 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:33.102 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:33.360 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:33.360 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.360 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:33.360 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:33.360 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:33.360 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.360 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:11:33.360 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.360 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.360 17:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.361 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:33.361 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:33.618 00:11:33.876 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.876 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.876 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.876 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.136 { 00:11:34.136 "auth": { 00:11:34.136 "dhgroup": "ffdhe4096", 00:11:34.136 "digest": "sha256", 00:11:34.136 "state": "completed" 00:11:34.136 }, 00:11:34.136 "cntlid": 31, 00:11:34.136 "listen_address": { 00:11:34.136 "adrfam": "IPv4", 00:11:34.136 "traddr": "10.0.0.2", 00:11:34.136 "trsvcid": "4420", 00:11:34.136 "trtype": "TCP" 00:11:34.136 }, 00:11:34.136 "peer_address": { 00:11:34.136 "adrfam": "IPv4", 00:11:34.136 "traddr": "10.0.0.1", 00:11:34.136 "trsvcid": "43480", 00:11:34.136 "trtype": "TCP" 00:11:34.136 }, 00:11:34.136 "qid": 0, 00:11:34.136 "state": "enabled", 00:11:34.136 "thread": "nvmf_tgt_poll_group_000" 00:11:34.136 } 00:11:34.136 ]' 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.136 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.396 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:11:35.338 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.338 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:35.338 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.338 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.338 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.338 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:35.338 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.338 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:35.339 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:35.597 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.856 00:11:35.856 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.856 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.856 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.423 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.423 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.423 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.423 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.423 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.423 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.423 { 00:11:36.423 "auth": { 00:11:36.423 "dhgroup": "ffdhe6144", 00:11:36.423 "digest": "sha256", 00:11:36.423 "state": "completed" 00:11:36.423 }, 00:11:36.424 "cntlid": 33, 00:11:36.424 "listen_address": { 00:11:36.424 "adrfam": "IPv4", 00:11:36.424 "traddr": "10.0.0.2", 00:11:36.424 "trsvcid": "4420", 00:11:36.424 "trtype": "TCP" 00:11:36.424 }, 00:11:36.424 "peer_address": { 00:11:36.424 "adrfam": "IPv4", 00:11:36.424 "traddr": "10.0.0.1", 00:11:36.424 "trsvcid": "43514", 00:11:36.424 "trtype": "TCP" 00:11:36.424 }, 00:11:36.424 "qid": 0, 00:11:36.424 "state": "enabled", 00:11:36.424 "thread": "nvmf_tgt_poll_group_000" 00:11:36.424 } 00:11:36.424 ]' 00:11:36.424 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:36.424 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:36.424 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:36.424 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:36.424 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:36.424 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.424 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.424 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.683 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid 
dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:11:37.617 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.617 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:37.617 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.617 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.617 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.617 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.617 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:37.617 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.875 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.134 00:11:38.134 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.134 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.134 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.700 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.701 { 00:11:38.701 "auth": { 00:11:38.701 "dhgroup": "ffdhe6144", 00:11:38.701 "digest": "sha256", 00:11:38.701 "state": "completed" 00:11:38.701 }, 00:11:38.701 "cntlid": 35, 00:11:38.701 "listen_address": { 00:11:38.701 "adrfam": "IPv4", 00:11:38.701 "traddr": "10.0.0.2", 00:11:38.701 "trsvcid": "4420", 00:11:38.701 "trtype": "TCP" 00:11:38.701 }, 00:11:38.701 "peer_address": { 00:11:38.701 "adrfam": "IPv4", 00:11:38.701 "traddr": "10.0.0.1", 00:11:38.701 "trsvcid": "39526", 00:11:38.701 "trtype": "TCP" 00:11:38.701 }, 00:11:38.701 "qid": 0, 00:11:38.701 "state": "enabled", 00:11:38.701 "thread": "nvmf_tgt_poll_group_000" 00:11:38.701 } 00:11:38.701 ]' 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.701 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.960 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:11:39.526 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.526 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.527 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:39.527 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.527 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.527 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.527 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:39.527 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:39.527 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.094 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.354 00:11:40.354 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.354 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.354 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.612 { 00:11:40.612 "auth": { 00:11:40.612 "dhgroup": "ffdhe6144", 00:11:40.612 "digest": "sha256", 00:11:40.612 "state": "completed" 00:11:40.612 }, 00:11:40.612 "cntlid": 37, 00:11:40.612 "listen_address": { 00:11:40.612 "adrfam": "IPv4", 00:11:40.612 "traddr": "10.0.0.2", 00:11:40.612 "trsvcid": "4420", 00:11:40.612 "trtype": "TCP" 00:11:40.612 }, 00:11:40.612 "peer_address": { 00:11:40.612 "adrfam": "IPv4", 00:11:40.612 "traddr": "10.0.0.1", 00:11:40.612 "trsvcid": "39540", 00:11:40.612 "trtype": "TCP" 00:11:40.612 }, 00:11:40.612 "qid": 0, 00:11:40.612 "state": "enabled", 00:11:40.612 "thread": "nvmf_tgt_poll_group_000" 00:11:40.612 } 00:11:40.612 ]' 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.612 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.871 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:41.807 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:42.374 00:11:42.374 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.374 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.374 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.632 17:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.632 { 00:11:42.632 "auth": { 00:11:42.632 "dhgroup": "ffdhe6144", 00:11:42.632 "digest": "sha256", 00:11:42.632 "state": "completed" 00:11:42.632 }, 00:11:42.632 "cntlid": 39, 00:11:42.632 "listen_address": { 00:11:42.632 "adrfam": "IPv4", 00:11:42.632 "traddr": "10.0.0.2", 00:11:42.632 "trsvcid": "4420", 00:11:42.632 "trtype": "TCP" 00:11:42.632 }, 00:11:42.632 "peer_address": { 00:11:42.632 "adrfam": "IPv4", 00:11:42.632 "traddr": "10.0.0.1", 00:11:42.632 "trsvcid": "39576", 00:11:42.632 "trtype": "TCP" 00:11:42.632 }, 00:11:42.632 "qid": 0, 00:11:42.632 "state": "enabled", 00:11:42.632 "thread": "nvmf_tgt_poll_group_000" 00:11:42.632 } 00:11:42.632 ]' 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.632 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.891 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:11:43.456 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.456 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:43.456 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.456 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.456 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.456 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:43.456 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:43.456 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:43.456 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.021 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.587 00:11:44.587 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.587 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.587 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.844 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.844 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.844 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.844 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.844 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.844 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.844 { 00:11:44.844 "auth": { 00:11:44.844 "dhgroup": 
"ffdhe8192", 00:11:44.844 "digest": "sha256", 00:11:44.844 "state": "completed" 00:11:44.844 }, 00:11:44.844 "cntlid": 41, 00:11:44.844 "listen_address": { 00:11:44.844 "adrfam": "IPv4", 00:11:44.844 "traddr": "10.0.0.2", 00:11:44.844 "trsvcid": "4420", 00:11:44.844 "trtype": "TCP" 00:11:44.844 }, 00:11:44.844 "peer_address": { 00:11:44.844 "adrfam": "IPv4", 00:11:44.844 "traddr": "10.0.0.1", 00:11:44.844 "trsvcid": "39612", 00:11:44.844 "trtype": "TCP" 00:11:44.844 }, 00:11:44.844 "qid": 0, 00:11:44.844 "state": "enabled", 00:11:44.844 "thread": "nvmf_tgt_poll_group_000" 00:11:44.844 } 00:11:44.844 ]' 00:11:44.844 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.844 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.845 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:44.845 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:44.845 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.845 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.845 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.845 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.102 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:11:45.666 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.666 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:45.666 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.666 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.666 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.666 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:45.666 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:45.666 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.924 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.859 00:11:46.859 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:46.859 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.859 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.859 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.859 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.859 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.859 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.859 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.859 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:46.859 { 00:11:46.859 "auth": { 00:11:46.859 "dhgroup": "ffdhe8192", 00:11:46.859 "digest": "sha256", 00:11:46.859 "state": "completed" 00:11:46.859 }, 00:11:46.859 "cntlid": 43, 00:11:46.859 "listen_address": { 00:11:46.859 "adrfam": "IPv4", 00:11:46.859 "traddr": "10.0.0.2", 00:11:46.859 "trsvcid": "4420", 00:11:46.859 "trtype": "TCP" 00:11:46.859 }, 00:11:46.859 "peer_address": { 00:11:46.859 "adrfam": "IPv4", 00:11:46.859 "traddr": 
"10.0.0.1", 00:11:46.859 "trsvcid": "34576", 00:11:46.859 "trtype": "TCP" 00:11:46.859 }, 00:11:46.859 "qid": 0, 00:11:46.859 "state": "enabled", 00:11:46.859 "thread": "nvmf_tgt_poll_group_000" 00:11:46.859 } 00:11:46.859 ]' 00:11:46.859 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.118 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:47.118 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.118 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:47.118 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.118 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.118 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.118 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.377 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:11:47.944 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.944 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:47.944 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.944 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.944 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.944 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:47.944 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:47.944 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:48.202 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:48.202 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.202 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:48.202 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:48.202 17:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:48.202 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.202 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.202 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.202 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.202 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.202 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.202 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.768 00:11:48.768 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.768 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.768 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.334 { 00:11:49.334 "auth": { 00:11:49.334 "dhgroup": "ffdhe8192", 00:11:49.334 "digest": "sha256", 00:11:49.334 "state": "completed" 00:11:49.334 }, 00:11:49.334 "cntlid": 45, 00:11:49.334 "listen_address": { 00:11:49.334 "adrfam": "IPv4", 00:11:49.334 "traddr": "10.0.0.2", 00:11:49.334 "trsvcid": "4420", 00:11:49.334 "trtype": "TCP" 00:11:49.334 }, 00:11:49.334 "peer_address": { 00:11:49.334 "adrfam": "IPv4", 00:11:49.334 "traddr": "10.0.0.1", 00:11:49.334 "trsvcid": "34596", 00:11:49.334 "trtype": "TCP" 00:11:49.334 }, 00:11:49.334 "qid": 0, 00:11:49.334 "state": "enabled", 00:11:49.334 "thread": "nvmf_tgt_poll_group_000" 00:11:49.334 } 00:11:49.334 ]' 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.334 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.592 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 
--dhchap-key key3 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.525 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:50.526 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:51.460 00:11:51.460 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.460 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.460 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.718 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.718 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.718 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.718 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.719 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.719 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.719 { 00:11:51.719 "auth": { 00:11:51.719 "dhgroup": "ffdhe8192", 00:11:51.719 "digest": "sha256", 00:11:51.719 "state": "completed" 00:11:51.719 }, 00:11:51.719 "cntlid": 47, 00:11:51.719 "listen_address": { 00:11:51.719 "adrfam": "IPv4", 00:11:51.719 "traddr": "10.0.0.2", 00:11:51.719 "trsvcid": "4420", 00:11:51.719 "trtype": "TCP" 00:11:51.719 }, 00:11:51.719 "peer_address": { 00:11:51.719 "adrfam": "IPv4", 00:11:51.719 "traddr": "10.0.0.1", 00:11:51.719 "trsvcid": "34626", 00:11:51.719 "trtype": "TCP" 00:11:51.719 }, 00:11:51.719 "qid": 0, 00:11:51.719 "state": "enabled", 00:11:51.719 "thread": "nvmf_tgt_poll_group_000" 00:11:51.719 } 00:11:51.719 ]' 00:11:51.719 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.719 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:51.719 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.719 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:51.719 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.719 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:51.719 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.719 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.978 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:11:52.546 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.546 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:52.546 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.546 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.546 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.546 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:52.546 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:52.546 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.546 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:52.546 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.113 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.372 00:11:53.372 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.372 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.372 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.631 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.631 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.631 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.631 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.631 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.631 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.631 { 00:11:53.631 "auth": { 00:11:53.631 "dhgroup": "null", 00:11:53.631 "digest": "sha384", 00:11:53.631 "state": "completed" 00:11:53.631 }, 00:11:53.631 "cntlid": 49, 00:11:53.631 "listen_address": { 00:11:53.631 "adrfam": "IPv4", 00:11:53.631 "traddr": "10.0.0.2", 00:11:53.631 "trsvcid": "4420", 00:11:53.631 "trtype": "TCP" 00:11:53.631 }, 00:11:53.631 "peer_address": { 00:11:53.631 "adrfam": "IPv4", 00:11:53.631 "traddr": "10.0.0.1", 00:11:53.631 "trsvcid": "34654", 00:11:53.631 "trtype": "TCP" 00:11:53.631 }, 00:11:53.631 "qid": 0, 00:11:53.631 "state": "enabled", 00:11:53.631 "thread": "nvmf_tgt_poll_group_000" 00:11:53.631 } 00:11:53.631 ]' 00:11:53.631 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.631 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.631 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.631 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:53.631 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.890 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.890 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.890 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.148 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:11:54.714 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.715 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:54.715 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.715 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.715 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.715 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.715 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:54.715 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.973 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.231 00:11:55.231 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.231 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.231 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.490 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.490 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.490 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.490 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.490 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.490 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.490 { 00:11:55.490 "auth": { 00:11:55.490 "dhgroup": "null", 00:11:55.490 "digest": "sha384", 00:11:55.490 "state": "completed" 00:11:55.490 }, 00:11:55.490 "cntlid": 51, 00:11:55.490 "listen_address": { 00:11:55.490 "adrfam": "IPv4", 00:11:55.490 "traddr": "10.0.0.2", 00:11:55.490 "trsvcid": "4420", 00:11:55.490 "trtype": "TCP" 00:11:55.490 }, 00:11:55.490 "peer_address": { 00:11:55.490 "adrfam": "IPv4", 00:11:55.490 "traddr": "10.0.0.1", 00:11:55.490 "trsvcid": "34692", 00:11:55.490 "trtype": "TCP" 00:11:55.490 }, 00:11:55.490 "qid": 0, 00:11:55.490 "state": "enabled", 00:11:55.490 "thread": "nvmf_tgt_poll_group_000" 00:11:55.490 } 00:11:55.490 ]' 00:11:55.490 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.490 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.490 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.490 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:55.490 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.748 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.748 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.748 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.007 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret 
DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:11:56.574 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.574 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:56.574 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.574 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.574 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.574 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.574 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:56.574 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.832 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.108 00:11:57.108 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.108 17:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.108 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.365 { 00:11:57.365 "auth": { 00:11:57.365 "dhgroup": "null", 00:11:57.365 "digest": "sha384", 00:11:57.365 "state": "completed" 00:11:57.365 }, 00:11:57.365 "cntlid": 53, 00:11:57.365 "listen_address": { 00:11:57.365 "adrfam": "IPv4", 00:11:57.365 "traddr": "10.0.0.2", 00:11:57.365 "trsvcid": "4420", 00:11:57.365 "trtype": "TCP" 00:11:57.365 }, 00:11:57.365 "peer_address": { 00:11:57.365 "adrfam": "IPv4", 00:11:57.365 "traddr": "10.0.0.1", 00:11:57.365 "trsvcid": "60152", 00:11:57.365 "trtype": "TCP" 00:11:57.365 }, 00:11:57.365 "qid": 0, 00:11:57.365 "state": "enabled", 00:11:57.365 "thread": "nvmf_tgt_poll_group_000" 00:11:57.365 } 00:11:57.365 ]' 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.365 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.622 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:11:58.188 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.188 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:11:58.188 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.188 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:58.446 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:59.039 00:11:59.039 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.039 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.039 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.039 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.039 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:11:59.039 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.039 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.039 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.039 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.039 { 00:11:59.039 "auth": { 00:11:59.039 "dhgroup": "null", 00:11:59.039 "digest": "sha384", 00:11:59.039 "state": "completed" 00:11:59.039 }, 00:11:59.039 "cntlid": 55, 00:11:59.039 "listen_address": { 00:11:59.039 "adrfam": "IPv4", 00:11:59.039 "traddr": "10.0.0.2", 00:11:59.039 "trsvcid": "4420", 00:11:59.039 "trtype": "TCP" 00:11:59.039 }, 00:11:59.039 "peer_address": { 00:11:59.039 "adrfam": "IPv4", 00:11:59.039 "traddr": "10.0.0.1", 00:11:59.039 "trsvcid": "60180", 00:11:59.039 "trtype": "TCP" 00:11:59.039 }, 00:11:59.039 "qid": 0, 00:11:59.039 "state": "enabled", 00:11:59.039 "thread": "nvmf_tgt_poll_group_000" 00:11:59.039 } 00:11:59.039 ]' 00:11:59.039 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.319 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.319 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.319 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:59.319 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.319 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.319 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.319 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.577 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:12:00.144 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.144 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:00.144 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.144 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.144 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.144 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.144 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.144 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:00.144 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.711 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.970 00:12:00.970 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.970 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.970 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.970 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.970 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.970 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.970 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.229 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.229 17:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.229 { 00:12:01.229 "auth": { 00:12:01.229 "dhgroup": "ffdhe2048", 00:12:01.229 "digest": "sha384", 00:12:01.229 "state": "completed" 00:12:01.229 }, 00:12:01.229 "cntlid": 57, 00:12:01.229 "listen_address": { 00:12:01.229 "adrfam": "IPv4", 00:12:01.229 "traddr": "10.0.0.2", 00:12:01.229 "trsvcid": "4420", 00:12:01.229 "trtype": "TCP" 00:12:01.229 }, 00:12:01.229 "peer_address": { 00:12:01.229 "adrfam": "IPv4", 00:12:01.229 "traddr": "10.0.0.1", 00:12:01.229 "trsvcid": "60202", 00:12:01.229 "trtype": "TCP" 00:12:01.229 }, 00:12:01.229 "qid": 0, 00:12:01.229 "state": "enabled", 00:12:01.229 "thread": "nvmf_tgt_poll_group_000" 00:12:01.229 } 00:12:01.229 ]' 00:12:01.229 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.229 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.229 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.229 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:01.229 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.229 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.229 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.229 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.489 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:12:02.057 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.057 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:02.057 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.057 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.057 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.057 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.057 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:02.057 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.315 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.573 00:12:02.573 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.573 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.573 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.830 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.830 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.830 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.830 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.830 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.830 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.830 { 00:12:02.830 "auth": { 00:12:02.830 "dhgroup": "ffdhe2048", 00:12:02.830 "digest": "sha384", 00:12:02.830 "state": "completed" 00:12:02.830 }, 00:12:02.830 "cntlid": 59, 00:12:02.830 "listen_address": { 00:12:02.830 "adrfam": "IPv4", 00:12:02.830 "traddr": "10.0.0.2", 00:12:02.830 "trsvcid": 
"4420", 00:12:02.830 "trtype": "TCP" 00:12:02.830 }, 00:12:02.830 "peer_address": { 00:12:02.830 "adrfam": "IPv4", 00:12:02.830 "traddr": "10.0.0.1", 00:12:02.830 "trsvcid": "60214", 00:12:02.830 "trtype": "TCP" 00:12:02.830 }, 00:12:02.830 "qid": 0, 00:12:02.830 "state": "enabled", 00:12:02.830 "thread": "nvmf_tgt_poll_group_000" 00:12:02.830 } 00:12:02.830 ]' 00:12:02.830 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.089 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:03.089 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.089 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:03.089 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.089 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.089 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.089 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.347 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:12:03.914 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.914 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:03.914 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.914 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.914 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.914 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.914 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:03.914 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.172 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.430 00:12:04.688 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.688 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.688 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.946 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.946 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.946 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.947 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.947 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.947 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.947 { 00:12:04.947 "auth": { 00:12:04.947 "dhgroup": "ffdhe2048", 00:12:04.947 "digest": "sha384", 00:12:04.947 "state": "completed" 00:12:04.947 }, 00:12:04.947 "cntlid": 61, 00:12:04.947 "listen_address": { 00:12:04.947 "adrfam": "IPv4", 00:12:04.947 "traddr": "10.0.0.2", 00:12:04.947 "trsvcid": "4420", 00:12:04.947 "trtype": "TCP" 00:12:04.947 }, 00:12:04.947 "peer_address": { 00:12:04.947 "adrfam": "IPv4", 00:12:04.947 "traddr": "10.0.0.1", 00:12:04.947 "trsvcid": "60236", 00:12:04.947 "trtype": "TCP" 00:12:04.947 }, 00:12:04.947 "qid": 0, 00:12:04.947 "state": "enabled", 00:12:04.947 "thread": "nvmf_tgt_poll_group_000" 00:12:04.947 } 00:12:04.947 ]' 00:12:04.947 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.947 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.947 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.947 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:04.947 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.947 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.947 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.947 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.204 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:12:05.769 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.769 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:05.769 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.769 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.769 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.769 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.769 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:05.770 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:06.028 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:06.286 00:12:06.286 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.286 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.286 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.544 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.544 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.544 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.544 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.544 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.544 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.544 { 00:12:06.544 "auth": { 00:12:06.544 "dhgroup": "ffdhe2048", 00:12:06.544 "digest": "sha384", 00:12:06.544 "state": "completed" 00:12:06.544 }, 00:12:06.544 "cntlid": 63, 00:12:06.544 "listen_address": { 00:12:06.544 "adrfam": "IPv4", 00:12:06.544 "traddr": "10.0.0.2", 00:12:06.544 "trsvcid": "4420", 00:12:06.544 "trtype": "TCP" 00:12:06.544 }, 00:12:06.544 "peer_address": { 00:12:06.544 "adrfam": "IPv4", 00:12:06.544 "traddr": "10.0.0.1", 00:12:06.544 "trsvcid": "49522", 00:12:06.544 "trtype": "TCP" 00:12:06.544 }, 00:12:06.544 "qid": 0, 00:12:06.544 "state": "enabled", 00:12:06.544 "thread": "nvmf_tgt_poll_group_000" 00:12:06.544 } 00:12:06.544 ]' 00:12:06.544 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.802 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.802 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.802 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:06.802 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:12:06.802 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.802 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.802 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.060 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:12:07.625 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.625 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:07.625 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.625 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.625 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.625 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:07.625 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.625 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:07.625 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:07.883 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:07.883 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.883 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:07.883 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:07.883 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:07.883 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.883 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.883 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.883 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.883 17:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.883 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.883 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.448 00:12:08.448 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.448 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.448 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.706 { 00:12:08.706 "auth": { 00:12:08.706 "dhgroup": "ffdhe3072", 00:12:08.706 "digest": "sha384", 00:12:08.706 "state": "completed" 00:12:08.706 }, 00:12:08.706 "cntlid": 65, 00:12:08.706 "listen_address": { 00:12:08.706 "adrfam": "IPv4", 00:12:08.706 "traddr": "10.0.0.2", 00:12:08.706 "trsvcid": "4420", 00:12:08.706 "trtype": "TCP" 00:12:08.706 }, 00:12:08.706 "peer_address": { 00:12:08.706 "adrfam": "IPv4", 00:12:08.706 "traddr": "10.0.0.1", 00:12:08.706 "trsvcid": "49552", 00:12:08.706 "trtype": "TCP" 00:12:08.706 }, 00:12:08.706 "qid": 0, 00:12:08.706 "state": "enabled", 00:12:08.706 "thread": "nvmf_tgt_poll_group_000" 00:12:08.706 } 00:12:08.706 ]' 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.706 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.964 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:12:09.898 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.463 00:12:10.463 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.463 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.463 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.722 { 00:12:10.722 "auth": { 00:12:10.722 "dhgroup": "ffdhe3072", 00:12:10.722 "digest": "sha384", 00:12:10.722 "state": "completed" 00:12:10.722 }, 00:12:10.722 "cntlid": 67, 00:12:10.722 "listen_address": { 00:12:10.722 "adrfam": "IPv4", 00:12:10.722 "traddr": "10.0.0.2", 00:12:10.722 "trsvcid": "4420", 00:12:10.722 "trtype": "TCP" 00:12:10.722 }, 00:12:10.722 "peer_address": { 00:12:10.722 "adrfam": "IPv4", 00:12:10.722 "traddr": "10.0.0.1", 00:12:10.722 "trsvcid": "49596", 00:12:10.722 "trtype": "TCP" 00:12:10.722 }, 00:12:10.722 "qid": 0, 00:12:10.722 "state": "enabled", 00:12:10.722 "thread": "nvmf_tgt_poll_group_000" 00:12:10.722 } 00:12:10.722 ]' 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.722 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.980 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid 
dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:12:11.608 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.608 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:11.608 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.608 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.867 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.867 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.867 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:11.867 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.126 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:12:12.385 00:12:12.385 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.385 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.385 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.643 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.643 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.643 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.643 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.643 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.643 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.643 { 00:12:12.643 "auth": { 00:12:12.643 "dhgroup": "ffdhe3072", 00:12:12.643 "digest": "sha384", 00:12:12.643 "state": "completed" 00:12:12.643 }, 00:12:12.643 "cntlid": 69, 00:12:12.643 "listen_address": { 00:12:12.643 "adrfam": "IPv4", 00:12:12.643 "traddr": "10.0.0.2", 00:12:12.643 "trsvcid": "4420", 00:12:12.643 "trtype": "TCP" 00:12:12.643 }, 00:12:12.643 "peer_address": { 00:12:12.643 "adrfam": "IPv4", 00:12:12.643 "traddr": "10.0.0.1", 00:12:12.643 "trsvcid": "49620", 00:12:12.643 "trtype": "TCP" 00:12:12.643 }, 00:12:12.643 "qid": 0, 00:12:12.643 "state": "enabled", 00:12:12.643 "thread": "nvmf_tgt_poll_group_000" 00:12:12.643 } 00:12:12.643 ]' 00:12:12.643 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.643 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.643 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.643 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:12.643 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.902 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.902 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.902 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.160 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:12:13.726 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:12:13.726 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:13.726 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.726 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.726 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.726 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.726 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:13.726 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:14.293 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:14.293 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.293 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:14.293 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:14.293 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:14.293 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.293 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:12:14.293 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.293 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.293 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.293 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.294 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.294 00:12:14.552 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.552 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.552 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.552 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.810 { 00:12:14.810 "auth": { 00:12:14.810 "dhgroup": "ffdhe3072", 00:12:14.810 "digest": "sha384", 00:12:14.810 "state": "completed" 00:12:14.810 }, 00:12:14.810 "cntlid": 71, 00:12:14.810 "listen_address": { 00:12:14.810 "adrfam": "IPv4", 00:12:14.810 "traddr": "10.0.0.2", 00:12:14.810 "trsvcid": "4420", 00:12:14.810 "trtype": "TCP" 00:12:14.810 }, 00:12:14.810 "peer_address": { 00:12:14.810 "adrfam": "IPv4", 00:12:14.810 "traddr": "10.0.0.1", 00:12:14.810 "trsvcid": "49642", 00:12:14.810 "trtype": "TCP" 00:12:14.810 }, 00:12:14.810 "qid": 0, 00:12:14.810 "state": "enabled", 00:12:14.810 "thread": "nvmf_tgt_poll_group_000" 00:12:14.810 } 00:12:14.810 ]' 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.810 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.067 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.003 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.262 00:12:16.523 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.523 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.523 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.781 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.781 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.781 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.781 17:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.781 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.781 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.781 { 00:12:16.781 "auth": { 00:12:16.781 "dhgroup": "ffdhe4096", 00:12:16.781 "digest": "sha384", 00:12:16.781 "state": "completed" 00:12:16.781 }, 00:12:16.781 "cntlid": 73, 00:12:16.781 "listen_address": { 00:12:16.781 "adrfam": "IPv4", 00:12:16.781 "traddr": "10.0.0.2", 00:12:16.781 "trsvcid": "4420", 00:12:16.781 "trtype": "TCP" 00:12:16.781 }, 00:12:16.781 "peer_address": { 00:12:16.781 "adrfam": "IPv4", 00:12:16.781 "traddr": "10.0.0.1", 00:12:16.782 "trsvcid": "44670", 00:12:16.782 "trtype": "TCP" 00:12:16.782 }, 00:12:16.782 "qid": 0, 00:12:16.782 "state": "enabled", 00:12:16.782 "thread": "nvmf_tgt_poll_group_000" 00:12:16.782 } 00:12:16.782 ]' 00:12:16.782 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.782 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.782 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.782 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.782 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.782 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.782 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.782 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.040 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.036 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.037 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.603 00:12:18.603 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.603 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.603 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.603 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.603 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.603 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.603 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.603 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.603 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.603 { 00:12:18.603 "auth": { 00:12:18.603 "dhgroup": "ffdhe4096", 
00:12:18.603 "digest": "sha384", 00:12:18.603 "state": "completed" 00:12:18.603 }, 00:12:18.603 "cntlid": 75, 00:12:18.604 "listen_address": { 00:12:18.604 "adrfam": "IPv4", 00:12:18.604 "traddr": "10.0.0.2", 00:12:18.604 "trsvcid": "4420", 00:12:18.604 "trtype": "TCP" 00:12:18.604 }, 00:12:18.604 "peer_address": { 00:12:18.604 "adrfam": "IPv4", 00:12:18.604 "traddr": "10.0.0.1", 00:12:18.604 "trsvcid": "44700", 00:12:18.604 "trtype": "TCP" 00:12:18.604 }, 00:12:18.604 "qid": 0, 00:12:18.604 "state": "enabled", 00:12:18.604 "thread": "nvmf_tgt_poll_group_000" 00:12:18.604 } 00:12:18.604 ]' 00:12:18.604 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.862 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.862 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.862 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:18.862 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.862 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.862 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.862 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.141 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:12:19.706 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.706 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:19.706 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.706 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.706 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.706 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.706 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:19.707 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:19.964 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:12:19.964 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.964 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:19.964 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:19.964 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:19.964 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.964 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.964 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.964 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.221 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.221 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.221 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.478 00:12:20.478 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.478 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.478 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.736 { 00:12:20.736 "auth": { 00:12:20.736 "dhgroup": "ffdhe4096", 00:12:20.736 "digest": "sha384", 00:12:20.736 "state": "completed" 00:12:20.736 }, 00:12:20.736 "cntlid": 77, 00:12:20.736 "listen_address": { 00:12:20.736 "adrfam": "IPv4", 00:12:20.736 "traddr": "10.0.0.2", 00:12:20.736 "trsvcid": "4420", 00:12:20.736 "trtype": "TCP" 00:12:20.736 }, 00:12:20.736 "peer_address": { 00:12:20.736 "adrfam": "IPv4", 00:12:20.736 "traddr": "10.0.0.1", 00:12:20.736 "trsvcid": "44720", 00:12:20.736 "trtype": 
"TCP" 00:12:20.736 }, 00:12:20.736 "qid": 0, 00:12:20.736 "state": "enabled", 00:12:20.736 "thread": "nvmf_tgt_poll_group_000" 00:12:20.736 } 00:12:20.736 ]' 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.736 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.993 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:21.925 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:22.539 00:12:22.539 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.539 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.539 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.539 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.539 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.539 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.539 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.539 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.539 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.539 { 00:12:22.539 "auth": { 00:12:22.539 "dhgroup": "ffdhe4096", 00:12:22.539 "digest": "sha384", 00:12:22.539 "state": "completed" 00:12:22.539 }, 00:12:22.539 "cntlid": 79, 00:12:22.539 "listen_address": { 00:12:22.539 "adrfam": "IPv4", 00:12:22.539 "traddr": "10.0.0.2", 00:12:22.539 "trsvcid": "4420", 00:12:22.539 "trtype": "TCP" 00:12:22.539 }, 00:12:22.539 "peer_address": { 00:12:22.539 "adrfam": "IPv4", 00:12:22.539 "traddr": "10.0.0.1", 00:12:22.539 "trsvcid": "44736", 00:12:22.539 "trtype": "TCP" 00:12:22.539 }, 00:12:22.539 "qid": 0, 00:12:22.539 "state": "enabled", 00:12:22.539 "thread": "nvmf_tgt_poll_group_000" 00:12:22.539 } 00:12:22.539 ]' 00:12:22.539 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.797 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:22.797 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
00:12:22.797 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:22.797 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.797 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.797 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.798 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.056 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:12:23.623 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.623 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:23.623 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.624 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.624 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.624 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:23.624 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.624 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:23.624 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:24.190 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:12:24.190 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.190 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:24.190 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:24.190 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:24.190 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.190 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.190 17:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.190 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.190 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.191 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.191 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.449 00:12:24.449 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.449 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.449 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.708 { 00:12:24.708 "auth": { 00:12:24.708 "dhgroup": "ffdhe6144", 00:12:24.708 "digest": "sha384", 00:12:24.708 "state": "completed" 00:12:24.708 }, 00:12:24.708 "cntlid": 81, 00:12:24.708 "listen_address": { 00:12:24.708 "adrfam": "IPv4", 00:12:24.708 "traddr": "10.0.0.2", 00:12:24.708 "trsvcid": "4420", 00:12:24.708 "trtype": "TCP" 00:12:24.708 }, 00:12:24.708 "peer_address": { 00:12:24.708 "adrfam": "IPv4", 00:12:24.708 "traddr": "10.0.0.1", 00:12:24.708 "trsvcid": "44754", 00:12:24.708 "trtype": "TCP" 00:12:24.708 }, 00:12:24.708 "qid": 0, 00:12:24.708 "state": "enabled", 00:12:24.708 "thread": "nvmf_tgt_poll_group_000" 00:12:24.708 } 00:12:24.708 ]' 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.708 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.967 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.904 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.471 00:12:26.471 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.471 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.471 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.748 { 00:12:26.748 "auth": { 00:12:26.748 "dhgroup": "ffdhe6144", 00:12:26.748 "digest": "sha384", 00:12:26.748 "state": "completed" 00:12:26.748 }, 00:12:26.748 "cntlid": 83, 00:12:26.748 "listen_address": { 00:12:26.748 "adrfam": "IPv4", 00:12:26.748 "traddr": "10.0.0.2", 00:12:26.748 "trsvcid": "4420", 00:12:26.748 "trtype": "TCP" 00:12:26.748 }, 00:12:26.748 "peer_address": { 00:12:26.748 "adrfam": "IPv4", 00:12:26.748 "traddr": "10.0.0.1", 00:12:26.748 "trsvcid": "53546", 00:12:26.748 "trtype": "TCP" 00:12:26.748 }, 00:12:26.748 "qid": 0, 00:12:26.748 "state": "enabled", 00:12:26.748 "thread": "nvmf_tgt_poll_group_000" 00:12:26.748 } 00:12:26.748 ]' 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.748 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.026 17:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.960 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.526 00:12:28.526 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.526 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.526 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.785 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.785 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.785 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.785 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.785 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.785 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.785 { 00:12:28.785 "auth": { 00:12:28.785 "dhgroup": "ffdhe6144", 00:12:28.785 "digest": "sha384", 00:12:28.785 "state": "completed" 00:12:28.785 }, 00:12:28.785 "cntlid": 85, 00:12:28.785 "listen_address": { 00:12:28.785 "adrfam": "IPv4", 00:12:28.785 "traddr": "10.0.0.2", 00:12:28.785 "trsvcid": "4420", 00:12:28.785 "trtype": "TCP" 00:12:28.785 }, 00:12:28.785 "peer_address": { 00:12:28.785 "adrfam": "IPv4", 00:12:28.785 "traddr": "10.0.0.1", 00:12:28.785 "trsvcid": "53568", 00:12:28.785 "trtype": "TCP" 00:12:28.785 }, 00:12:28.785 "qid": 0, 00:12:28.785 "state": "enabled", 00:12:28.785 "thread": "nvmf_tgt_poll_group_000" 00:12:28.785 } 00:12:28.785 ]' 00:12:28.785 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.044 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:29.044 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.044 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:29.044 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.044 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.044 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.044 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.303 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret 
DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:12:30.237 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.238 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:30.238 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.238 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.238 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.238 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.238 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:30.238 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:30.496 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:30.754 00:12:30.754 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.754 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:12:30.754 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.320 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.320 { 00:12:31.320 "auth": { 00:12:31.320 "dhgroup": "ffdhe6144", 00:12:31.320 "digest": "sha384", 00:12:31.320 "state": "completed" 00:12:31.320 }, 00:12:31.320 "cntlid": 87, 00:12:31.320 "listen_address": { 00:12:31.320 "adrfam": "IPv4", 00:12:31.320 "traddr": "10.0.0.2", 00:12:31.320 "trsvcid": "4420", 00:12:31.320 "trtype": "TCP" 00:12:31.320 }, 00:12:31.320 "peer_address": { 00:12:31.320 "adrfam": "IPv4", 00:12:31.320 "traddr": "10.0.0.1", 00:12:31.320 "trsvcid": "53590", 00:12:31.320 "trtype": "TCP" 00:12:31.320 }, 00:12:31.320 "qid": 0, 00:12:31.320 "state": "enabled", 00:12:31.320 "thread": "nvmf_tgt_poll_group_000" 00:12:31.320 } 00:12:31.320 ]' 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.320 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.578 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:12:32.183 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.183 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:32.183 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.183 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.183 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.183 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:32.183 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.183 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:32.183 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.749 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.314 00:12:33.314 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.314 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.314 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.573 17:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.573 { 00:12:33.573 "auth": { 00:12:33.573 "dhgroup": "ffdhe8192", 00:12:33.573 "digest": "sha384", 00:12:33.573 "state": "completed" 00:12:33.573 }, 00:12:33.573 "cntlid": 89, 00:12:33.573 "listen_address": { 00:12:33.573 "adrfam": "IPv4", 00:12:33.573 "traddr": "10.0.0.2", 00:12:33.573 "trsvcid": "4420", 00:12:33.573 "trtype": "TCP" 00:12:33.573 }, 00:12:33.573 "peer_address": { 00:12:33.573 "adrfam": "IPv4", 00:12:33.573 "traddr": "10.0.0.1", 00:12:33.573 "trsvcid": "53624", 00:12:33.573 "trtype": "TCP" 00:12:33.573 }, 00:12:33.573 "qid": 0, 00:12:33.573 "state": "enabled", 00:12:33.573 "thread": "nvmf_tgt_poll_group_000" 00:12:33.573 } 00:12:33.573 ]' 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.573 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.869 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:12:34.801 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.801 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:34.801 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.801 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.801 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.801 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.801 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:34.801 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.059 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.625 00:12:35.625 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.625 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.625 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.883 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.883 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.883 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.883 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.883 17:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.883 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.883 { 00:12:35.883 "auth": { 00:12:35.883 "dhgroup": "ffdhe8192", 00:12:35.883 "digest": "sha384", 00:12:35.883 "state": "completed" 00:12:35.883 }, 00:12:35.883 "cntlid": 91, 00:12:35.883 "listen_address": { 00:12:35.883 "adrfam": "IPv4", 00:12:35.883 "traddr": "10.0.0.2", 00:12:35.883 "trsvcid": "4420", 00:12:35.883 "trtype": "TCP" 00:12:35.883 }, 00:12:35.883 "peer_address": { 00:12:35.883 "adrfam": "IPv4", 00:12:35.883 "traddr": "10.0.0.1", 00:12:35.883 "trsvcid": "53654", 00:12:35.883 "trtype": "TCP" 00:12:35.883 }, 00:12:35.883 "qid": 0, 00:12:35.883 "state": "enabled", 00:12:35.883 "thread": "nvmf_tgt_poll_group_000" 00:12:35.883 } 00:12:35.883 ]' 00:12:35.883 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.883 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.883 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.883 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:35.883 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.142 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.142 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.142 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.417 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:12:36.988 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.988 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:36.988 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.988 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.989 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.989 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.989 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:36.989 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.246 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.812 00:12:37.812 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.812 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.812 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.070 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.070 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.070 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.070 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.070 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.070 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.070 { 00:12:38.070 "auth": { 00:12:38.070 "dhgroup": "ffdhe8192", 00:12:38.070 "digest": "sha384", 00:12:38.070 "state": "completed" 00:12:38.070 }, 00:12:38.070 "cntlid": 93, 00:12:38.070 "listen_address": { 00:12:38.070 "adrfam": 
"IPv4", 00:12:38.071 "traddr": "10.0.0.2", 00:12:38.071 "trsvcid": "4420", 00:12:38.071 "trtype": "TCP" 00:12:38.071 }, 00:12:38.071 "peer_address": { 00:12:38.071 "adrfam": "IPv4", 00:12:38.071 "traddr": "10.0.0.1", 00:12:38.071 "trsvcid": "42244", 00:12:38.071 "trtype": "TCP" 00:12:38.071 }, 00:12:38.071 "qid": 0, 00:12:38.071 "state": "enabled", 00:12:38.071 "thread": "nvmf_tgt_poll_group_000" 00:12:38.071 } 00:12:38.071 ]' 00:12:38.071 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.071 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.071 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.329 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:38.329 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.329 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.329 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.329 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.588 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:12:39.155 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.155 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:39.155 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.155 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.155 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.155 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.155 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:39.155 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:39.414 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:39.414 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.414 17:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:39.414 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:39.414 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:39.414 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.414 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:12:39.414 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.414 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.673 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.673 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:39.673 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:40.239 00:12:40.239 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.239 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.239 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.498 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.498 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.498 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.498 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.498 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.498 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.498 { 00:12:40.498 "auth": { 00:12:40.498 "dhgroup": "ffdhe8192", 00:12:40.498 "digest": "sha384", 00:12:40.498 "state": "completed" 00:12:40.498 }, 00:12:40.498 "cntlid": 95, 00:12:40.498 "listen_address": { 00:12:40.498 "adrfam": "IPv4", 00:12:40.498 "traddr": "10.0.0.2", 00:12:40.498 "trsvcid": "4420", 00:12:40.498 "trtype": "TCP" 00:12:40.498 }, 00:12:40.498 "peer_address": { 00:12:40.498 "adrfam": "IPv4", 00:12:40.498 "traddr": "10.0.0.1", 00:12:40.498 "trsvcid": "42272", 00:12:40.498 "trtype": "TCP" 00:12:40.498 }, 00:12:40.498 "qid": 0, 00:12:40.498 "state": "enabled", 00:12:40.498 "thread": "nvmf_tgt_poll_group_000" 00:12:40.498 } 00:12:40.498 ]' 00:12:40.498 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.498 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.498 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.498 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:40.498 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.756 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.756 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.756 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.020 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:12:41.612 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.612 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:41.612 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.612 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.612 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.612 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:41.612 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.612 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.612 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:41.612 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:41.870 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:41.870 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.870 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.870 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:41.870 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:41.870 17:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.870 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.870 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.870 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.870 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.870 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.870 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.437 00:12:42.437 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.437 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.437 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.695 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.695 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.695 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.695 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.695 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.695 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.695 { 00:12:42.695 "auth": { 00:12:42.695 "dhgroup": "null", 00:12:42.695 "digest": "sha512", 00:12:42.695 "state": "completed" 00:12:42.695 }, 00:12:42.695 "cntlid": 97, 00:12:42.695 "listen_address": { 00:12:42.695 "adrfam": "IPv4", 00:12:42.695 "traddr": "10.0.0.2", 00:12:42.695 "trsvcid": "4420", 00:12:42.696 "trtype": "TCP" 00:12:42.696 }, 00:12:42.696 "peer_address": { 00:12:42.696 "adrfam": "IPv4", 00:12:42.696 "traddr": "10.0.0.1", 00:12:42.696 "trsvcid": "42292", 00:12:42.696 "trtype": "TCP" 00:12:42.696 }, 00:12:42.696 "qid": 0, 00:12:42.696 "state": "enabled", 00:12:42.696 "thread": "nvmf_tgt_poll_group_000" 00:12:42.696 } 00:12:42.696 ]' 00:12:42.696 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.696 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.696 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.696 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:42.696 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.696 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.696 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.696 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.954 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:12:43.890 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.890 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:43.890 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.890 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.890 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.890 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.890 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:43.890 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:44.149 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:44.149 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.149 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:44.149 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:44.149 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:44.149 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.149 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.149 17:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.149 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.149 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.149 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.149 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.408 00:12:44.408 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.408 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.408 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.665 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.665 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.665 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.665 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.665 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.665 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.665 { 00:12:44.665 "auth": { 00:12:44.665 "dhgroup": "null", 00:12:44.665 "digest": "sha512", 00:12:44.665 "state": "completed" 00:12:44.665 }, 00:12:44.665 "cntlid": 99, 00:12:44.665 "listen_address": { 00:12:44.665 "adrfam": "IPv4", 00:12:44.665 "traddr": "10.0.0.2", 00:12:44.665 "trsvcid": "4420", 00:12:44.665 "trtype": "TCP" 00:12:44.665 }, 00:12:44.665 "peer_address": { 00:12:44.665 "adrfam": "IPv4", 00:12:44.665 "traddr": "10.0.0.1", 00:12:44.665 "trsvcid": "42328", 00:12:44.665 "trtype": "TCP" 00:12:44.665 }, 00:12:44.665 "qid": 0, 00:12:44.665 "state": "enabled", 00:12:44.665 "thread": "nvmf_tgt_poll_group_000" 00:12:44.665 } 00:12:44.665 ]' 00:12:44.665 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.924 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.924 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.924 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:44.924 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.924 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
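For reference, the qpair check that recurs throughout this trace reduces to three jq assertions against the nvmf_subsystem_get_qpairs output. A minimal sketch, reconstructed from the commands above for the sha512/null iteration; invoking scripts/rpc.py directly here stands in for the test's rpc_cmd wrapper:

    # confirm the admin qpair authenticated with the expected digest, dhgroup and state
    qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
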
00:12:44.924 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.924 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.183 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:12:45.751 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.751 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:45.751 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.751 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.751 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.751 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:45.751 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:45.751 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:46.008 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:46.008 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.008 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:46.008 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:46.008 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:46.008 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.008 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.009 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.009 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.009 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.009 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.009 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.266 00:12:46.266 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:46.266 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.266 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:46.525 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.525 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.525 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.525 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.784 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.784 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.784 { 00:12:46.784 "auth": { 00:12:46.784 "dhgroup": "null", 00:12:46.784 "digest": "sha512", 00:12:46.784 "state": "completed" 00:12:46.784 }, 00:12:46.784 "cntlid": 101, 00:12:46.784 "listen_address": { 00:12:46.784 "adrfam": "IPv4", 00:12:46.784 "traddr": "10.0.0.2", 00:12:46.784 "trsvcid": "4420", 00:12:46.784 "trtype": "TCP" 00:12:46.784 }, 00:12:46.784 "peer_address": { 00:12:46.784 "adrfam": "IPv4", 00:12:46.784 "traddr": "10.0.0.1", 00:12:46.784 "trsvcid": "40694", 00:12:46.784 "trtype": "TCP" 00:12:46.784 }, 00:12:46.784 "qid": 0, 00:12:46.784 "state": "enabled", 00:12:46.784 "thread": "nvmf_tgt_poll_group_000" 00:12:46.784 } 00:12:46.784 ]' 00:12:46.784 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.784 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.784 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.784 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:46.784 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.784 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.784 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.784 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.042 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:12:48.046 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.046 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:48.046 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.046 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.046 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.046 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.046 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:48.046 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:48.046 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:48.046 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.046 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:48.046 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:48.046 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:48.046 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.047 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:12:48.047 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.047 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.047 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.047 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:48.047 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:48.614 
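The same sequence repeats for every digest/dhgroup/key combination in the matrix. Condensed from the trace, one iteration looks roughly like the sketch below; hostrpc abbreviates rpc.py -s /var/tmp/host.sock as shown above, while HOSTNQN, HOSTID and SECRET are placeholders for the uuid-based host NQN, host ID and DHHC-1 secret used in this run:

    # host side: restrict the initiator to the digest/dhgroup under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    # target side: allow the host with the key under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3
    # attach over the SPDK host stack and verify the authenticated qpair
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'            # expect nvme0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0    # expect auth.state == "completed"
    hostrpc bdev_nvme_detach_controller nvme0
    # kernel initiator path: connect and disconnect with the raw DHHC-1 secret
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" --dhchap-secret "$SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
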
00:12:48.614 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.614 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.614 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.873 { 00:12:48.873 "auth": { 00:12:48.873 "dhgroup": "null", 00:12:48.873 "digest": "sha512", 00:12:48.873 "state": "completed" 00:12:48.873 }, 00:12:48.873 "cntlid": 103, 00:12:48.873 "listen_address": { 00:12:48.873 "adrfam": "IPv4", 00:12:48.873 "traddr": "10.0.0.2", 00:12:48.873 "trsvcid": "4420", 00:12:48.873 "trtype": "TCP" 00:12:48.873 }, 00:12:48.873 "peer_address": { 00:12:48.873 "adrfam": "IPv4", 00:12:48.873 "traddr": "10.0.0.1", 00:12:48.873 "trsvcid": "40724", 00:12:48.873 "trtype": "TCP" 00:12:48.873 }, 00:12:48.873 "qid": 0, 00:12:48.873 "state": "enabled", 00:12:48.873 "thread": "nvmf_tgt_poll_group_000" 00:12:48.873 } 00:12:48.873 ]' 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.873 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.442 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:12:50.010 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.010 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:50.010 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.010 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.010 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.010 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:50.010 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.010 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:50.010 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.268 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.835 00:12:50.835 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.835 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.835 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.093 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.093 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.093 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.093 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.093 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.093 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.093 { 00:12:51.093 "auth": { 00:12:51.093 "dhgroup": "ffdhe2048", 00:12:51.093 "digest": "sha512", 00:12:51.093 "state": "completed" 00:12:51.093 }, 00:12:51.093 "cntlid": 105, 00:12:51.093 "listen_address": { 00:12:51.093 "adrfam": "IPv4", 00:12:51.093 "traddr": "10.0.0.2", 00:12:51.093 "trsvcid": "4420", 00:12:51.093 "trtype": "TCP" 00:12:51.093 }, 00:12:51.093 "peer_address": { 00:12:51.093 "adrfam": "IPv4", 00:12:51.093 "traddr": "10.0.0.1", 00:12:51.093 "trsvcid": "40732", 00:12:51.093 "trtype": "TCP" 00:12:51.093 }, 00:12:51.093 "qid": 0, 00:12:51.093 "state": "enabled", 00:12:51.093 "thread": "nvmf_tgt_poll_group_000" 00:12:51.093 } 00:12:51.093 ]' 00:12:51.093 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.094 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.094 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.094 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:51.094 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.094 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.094 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.094 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.352 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:12:52.326 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.326 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:52.326 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
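After the qpair check, each round is re-verified in-band from the Linux host with nvme-cli, using the generated DHHC-1 secrets printed in the log. A sketch of that step for the key0/ckey0 round above; DHCHAP_KEY and DHCHAP_CTRL_KEY are placeholder names standing in for the DHHC-1:00:... and DHHC-1:03:... strings shown in the run:

    # stand-ins for the generated secrets printed earlier in the log (key0 / ckey0)
    DHCHAP_KEY='DHHC-1:00:...'        # host secret
    DHCHAP_CTRL_KEY='DHHC-1:03:...'   # controller secret

    # connect through the kernel initiator and authenticate with DH-HMAC-CHAP
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee \
        --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee \
        --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"

    # tear the session down and de-authorize the host before the next combination
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee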
00:12:52.326 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.326 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.326 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.326 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:52.326 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.326 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.896 00:12:52.896 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.896 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.896 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.154 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.154 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.154 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.154 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.154 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.154 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.154 { 00:12:53.154 "auth": { 00:12:53.154 "dhgroup": "ffdhe2048", 00:12:53.154 "digest": "sha512", 00:12:53.154 "state": "completed" 00:12:53.154 }, 00:12:53.154 "cntlid": 107, 00:12:53.154 "listen_address": { 00:12:53.154 "adrfam": "IPv4", 00:12:53.154 "traddr": "10.0.0.2", 00:12:53.154 "trsvcid": "4420", 00:12:53.154 "trtype": "TCP" 00:12:53.154 }, 00:12:53.154 "peer_address": { 00:12:53.154 "adrfam": "IPv4", 00:12:53.154 "traddr": "10.0.0.1", 00:12:53.154 "trsvcid": "40748", 00:12:53.154 "trtype": "TCP" 00:12:53.154 }, 00:12:53.154 "qid": 0, 00:12:53.154 "state": "enabled", 00:12:53.154 "thread": "nvmf_tgt_poll_group_000" 00:12:53.154 } 00:12:53.154 ]' 00:12:53.154 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.154 18:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.154 18:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.154 18:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:53.154 18:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.154 18:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.154 18:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.154 18:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.720 18:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:12:54.286 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.286 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:54.286 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.286 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.286 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.286 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.286 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:54.286 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:54.544 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:54.544 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.544 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:54.544 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:54.544 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:54.544 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.544 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.544 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.544 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.802 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.802 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.802 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.062 00:12:55.062 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.062 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.062 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.335 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.335 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.335 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.335 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.336 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.336 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.336 { 00:12:55.336 "auth": { 00:12:55.336 "dhgroup": 
"ffdhe2048", 00:12:55.336 "digest": "sha512", 00:12:55.336 "state": "completed" 00:12:55.336 }, 00:12:55.336 "cntlid": 109, 00:12:55.336 "listen_address": { 00:12:55.336 "adrfam": "IPv4", 00:12:55.336 "traddr": "10.0.0.2", 00:12:55.336 "trsvcid": "4420", 00:12:55.336 "trtype": "TCP" 00:12:55.336 }, 00:12:55.336 "peer_address": { 00:12:55.336 "adrfam": "IPv4", 00:12:55.336 "traddr": "10.0.0.1", 00:12:55.336 "trsvcid": "40776", 00:12:55.336 "trtype": "TCP" 00:12:55.336 }, 00:12:55.336 "qid": 0, 00:12:55.336 "state": "enabled", 00:12:55.336 "thread": "nvmf_tgt_poll_group_000" 00:12:55.336 } 00:12:55.336 ]' 00:12:55.336 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.336 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.336 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.602 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:55.602 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.602 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.602 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.602 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.871 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:12:56.463 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.463 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:56.463 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.463 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.463 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.463 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.463 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.463 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 3 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.736 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.012 00:12:57.290 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.290 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.290 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.561 { 00:12:57.561 "auth": { 00:12:57.561 "dhgroup": "ffdhe2048", 00:12:57.561 "digest": "sha512", 00:12:57.561 "state": "completed" 00:12:57.561 }, 00:12:57.561 "cntlid": 111, 00:12:57.561 "listen_address": { 00:12:57.561 "adrfam": "IPv4", 00:12:57.561 "traddr": "10.0.0.2", 00:12:57.561 "trsvcid": "4420", 00:12:57.561 "trtype": "TCP" 00:12:57.561 }, 00:12:57.561 "peer_address": { 00:12:57.561 "adrfam": "IPv4", 00:12:57.561 "traddr": "10.0.0.1", 00:12:57.561 "trsvcid": "51816", 00:12:57.561 "trtype": "TCP" 00:12:57.561 }, 00:12:57.561 "qid": 0, 00:12:57.561 
"state": "enabled", 00:12:57.561 "thread": "nvmf_tgt_poll_group_000" 00:12:57.561 } 00:12:57.561 ]' 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.561 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.819 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:12:58.755 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.755 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:12:58.755 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.755 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.755 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.755 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:58.755 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.755 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:58.755 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:59.013 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:59.014 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.014 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:59.014 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:59.014 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:12:59.014 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.014 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.014 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.014 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.014 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.014 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.014 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.272 00:12:59.272 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:59.272 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:59.272 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.530 { 00:12:59.530 "auth": { 00:12:59.530 "dhgroup": "ffdhe3072", 00:12:59.530 "digest": "sha512", 00:12:59.530 "state": "completed" 00:12:59.530 }, 00:12:59.530 "cntlid": 113, 00:12:59.530 "listen_address": { 00:12:59.530 "adrfam": "IPv4", 00:12:59.530 "traddr": "10.0.0.2", 00:12:59.530 "trsvcid": "4420", 00:12:59.530 "trtype": "TCP" 00:12:59.530 }, 00:12:59.530 "peer_address": { 00:12:59.530 "adrfam": "IPv4", 00:12:59.530 "traddr": "10.0.0.1", 00:12:59.530 "trsvcid": "51838", 00:12:59.530 "trtype": "TCP" 00:12:59.530 }, 00:12:59.530 "qid": 0, 00:12:59.530 "state": "enabled", 00:12:59.530 "thread": "nvmf_tgt_poll_group_000" 00:12:59.530 } 00:12:59.530 ]' 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.530 18:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.530 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.097 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:13:00.664 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.664 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:00.664 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.664 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.664 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.664 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.664 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:00.664 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.922 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.221 00:13:01.510 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.510 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.510 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.510 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.510 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.510 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.510 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.510 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.510 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.510 { 00:13:01.510 "auth": { 00:13:01.510 "dhgroup": "ffdhe3072", 00:13:01.510 "digest": "sha512", 00:13:01.510 "state": "completed" 00:13:01.510 }, 00:13:01.510 "cntlid": 115, 00:13:01.510 "listen_address": { 00:13:01.510 "adrfam": "IPv4", 00:13:01.510 "traddr": "10.0.0.2", 00:13:01.510 "trsvcid": "4420", 00:13:01.510 "trtype": "TCP" 00:13:01.510 }, 00:13:01.510 "peer_address": { 00:13:01.510 "adrfam": "IPv4", 00:13:01.510 "traddr": "10.0.0.1", 00:13:01.510 "trsvcid": "51860", 00:13:01.510 "trtype": "TCP" 00:13:01.510 }, 00:13:01.510 "qid": 0, 00:13:01.510 "state": "enabled", 00:13:01.510 "thread": "nvmf_tgt_poll_group_000" 00:13:01.510 } 00:13:01.510 ]' 00:13:01.510 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.768 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.768 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.768 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:01.768 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.768 18:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.768 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.768 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.026 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:13:02.594 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.594 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:02.594 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.594 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.594 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.594 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.594 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:02.594 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:03.162 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:13:03.162 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.162 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:03.162 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:03.162 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:03.162 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.162 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.162 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.162 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.162 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.162 18:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.162 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.420 00:13:03.420 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.420 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.420 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.420 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.678 { 00:13:03.678 "auth": { 00:13:03.678 "dhgroup": "ffdhe3072", 00:13:03.678 "digest": "sha512", 00:13:03.678 "state": "completed" 00:13:03.678 }, 00:13:03.678 "cntlid": 117, 00:13:03.678 "listen_address": { 00:13:03.678 "adrfam": "IPv4", 00:13:03.678 "traddr": "10.0.0.2", 00:13:03.678 "trsvcid": "4420", 00:13:03.678 "trtype": "TCP" 00:13:03.678 }, 00:13:03.678 "peer_address": { 00:13:03.678 "adrfam": "IPv4", 00:13:03.678 "traddr": "10.0.0.1", 00:13:03.678 "trsvcid": "51890", 00:13:03.678 "trtype": "TCP" 00:13:03.678 }, 00:13:03.678 "qid": 0, 00:13:03.678 "state": "enabled", 00:13:03.678 "thread": "nvmf_tgt_poll_group_000" 00:13:03.678 } 00:13:03.678 ]' 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.678 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
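The checks interleaved between attach and detach above reduce to a name lookup plus three jq probes against the target's qpair report. A sketch assuming the same RPC sockets as the run, with expected values from the sha512 + ffdhe3072 round that just completed:

    # the attached controller should be reported by the host-side bdev layer as nvme0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'               # expect: nvme0

    # the target-side qpair should carry the negotiated auth parameters
    qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"                        # expect: sha512
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"                        # expect: ffdhe3072
    jq -r '.[0].auth.state'   <<< "$qpairs"                        # expect: completed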
00:13:03.939 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:13:04.522 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.522 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:04.522 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.522 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.522 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.522 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.522 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:04.523 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:04.790 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:13:04.790 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.790 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:04.790 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:04.790 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:04.790 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.790 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:13:04.791 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.791 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.791 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.791 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:04.791 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:05.051 00:13:05.051 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.051 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.051 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.310 { 00:13:05.310 "auth": { 00:13:05.310 "dhgroup": "ffdhe3072", 00:13:05.310 "digest": "sha512", 00:13:05.310 "state": "completed" 00:13:05.310 }, 00:13:05.310 "cntlid": 119, 00:13:05.310 "listen_address": { 00:13:05.310 "adrfam": "IPv4", 00:13:05.310 "traddr": "10.0.0.2", 00:13:05.310 "trsvcid": "4420", 00:13:05.310 "trtype": "TCP" 00:13:05.310 }, 00:13:05.310 "peer_address": { 00:13:05.310 "adrfam": "IPv4", 00:13:05.310 "traddr": "10.0.0.1", 00:13:05.310 "trsvcid": "51920", 00:13:05.310 "trtype": "TCP" 00:13:05.310 }, 00:13:05.310 "qid": 0, 00:13:05.310 "state": "enabled", 00:13:05.310 "thread": "nvmf_tgt_poll_group_000" 00:13:05.310 } 00:13:05.310 ]' 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.310 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.877 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:13:06.444 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:13:06.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.444 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:06.444 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.444 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.444 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.444 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:06.444 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.444 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:06.444 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.702 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.959 00:13:06.959 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.959 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:13:06.959 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.216 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.216 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.216 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.216 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.474 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.474 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.474 { 00:13:07.474 "auth": { 00:13:07.474 "dhgroup": "ffdhe4096", 00:13:07.474 "digest": "sha512", 00:13:07.474 "state": "completed" 00:13:07.474 }, 00:13:07.474 "cntlid": 121, 00:13:07.474 "listen_address": { 00:13:07.474 "adrfam": "IPv4", 00:13:07.474 "traddr": "10.0.0.2", 00:13:07.474 "trsvcid": "4420", 00:13:07.474 "trtype": "TCP" 00:13:07.474 }, 00:13:07.474 "peer_address": { 00:13:07.474 "adrfam": "IPv4", 00:13:07.474 "traddr": "10.0.0.1", 00:13:07.474 "trsvcid": "60142", 00:13:07.474 "trtype": "TCP" 00:13:07.474 }, 00:13:07.474 "qid": 0, 00:13:07.474 "state": "enabled", 00:13:07.474 "thread": "nvmf_tgt_poll_group_000" 00:13:07.474 } 00:13:07.474 ]' 00:13:07.474 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.474 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:07.474 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.474 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:07.474 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.474 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.474 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.474 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.730 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:13:08.295 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.295 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:08.295 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.295 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.295 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.295 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.295 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:08.295 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.860 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.117 00:13:09.117 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:09.117 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:09.117 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.375 18:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.375 { 00:13:09.375 "auth": { 00:13:09.375 "dhgroup": "ffdhe4096", 00:13:09.375 "digest": "sha512", 00:13:09.375 "state": "completed" 00:13:09.375 }, 00:13:09.375 "cntlid": 123, 00:13:09.375 "listen_address": { 00:13:09.375 "adrfam": "IPv4", 00:13:09.375 "traddr": "10.0.0.2", 00:13:09.375 "trsvcid": "4420", 00:13:09.375 "trtype": "TCP" 00:13:09.375 }, 00:13:09.375 "peer_address": { 00:13:09.375 "adrfam": "IPv4", 00:13:09.375 "traddr": "10.0.0.1", 00:13:09.375 "trsvcid": "60172", 00:13:09.375 "trtype": "TCP" 00:13:09.375 }, 00:13:09.375 "qid": 0, 00:13:09.375 "state": "enabled", 00:13:09.375 "thread": "nvmf_tgt_poll_group_000" 00:13:09.375 } 00:13:09.375 ]' 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.375 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.638 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:13:10.203 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.461 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:10.461 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.461 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.461 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:10.461 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.461 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:10.461 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.719 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.977 00:13:10.978 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.978 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.978 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:11.542 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.542 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.542 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.542 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.542 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.542 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.542 { 00:13:11.542 "auth": { 00:13:11.542 "dhgroup": "ffdhe4096", 00:13:11.542 "digest": "sha512", 00:13:11.542 "state": "completed" 00:13:11.542 }, 00:13:11.542 "cntlid": 125, 00:13:11.542 "listen_address": { 00:13:11.542 "adrfam": "IPv4", 00:13:11.542 "traddr": "10.0.0.2", 00:13:11.542 "trsvcid": "4420", 00:13:11.542 "trtype": "TCP" 00:13:11.542 }, 00:13:11.542 "peer_address": { 00:13:11.542 "adrfam": "IPv4", 00:13:11.542 "traddr": "10.0.0.1", 00:13:11.542 "trsvcid": "60194", 00:13:11.542 "trtype": "TCP" 00:13:11.542 }, 00:13:11.542 "qid": 0, 00:13:11.542 "state": "enabled", 00:13:11.542 "thread": "nvmf_tgt_poll_group_000" 00:13:11.542 } 00:13:11.542 ]' 00:13:11.542 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.543 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.543 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.543 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:11.543 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.543 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.543 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.543 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.801 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:13:12.366 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.366 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:12.366 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.366 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.366 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.366 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.366 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:12.366 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:12.624 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.189 00:13:13.189 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.189 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.189 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.446 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.446 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.446 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.446 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.446 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.447 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.447 { 00:13:13.447 "auth": { 00:13:13.447 "dhgroup": "ffdhe4096", 00:13:13.447 "digest": "sha512", 00:13:13.447 "state": "completed" 00:13:13.447 }, 00:13:13.447 "cntlid": 127, 00:13:13.447 "listen_address": { 00:13:13.447 "adrfam": "IPv4", 00:13:13.447 "traddr": "10.0.0.2", 00:13:13.447 "trsvcid": "4420", 00:13:13.447 "trtype": "TCP" 00:13:13.447 }, 
00:13:13.447 "peer_address": { 00:13:13.447 "adrfam": "IPv4", 00:13:13.447 "traddr": "10.0.0.1", 00:13:13.447 "trsvcid": "60214", 00:13:13.447 "trtype": "TCP" 00:13:13.447 }, 00:13:13.447 "qid": 0, 00:13:13.447 "state": "enabled", 00:13:13.447 "thread": "nvmf_tgt_poll_group_000" 00:13:13.447 } 00:13:13.447 ]' 00:13:13.447 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.447 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.447 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.447 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:13.447 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.447 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.447 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.447 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.704 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:13:14.637 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.637 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:14.637 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.637 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.637 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.637 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:14.637 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.637 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:14.637 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.894 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.151 00:13:15.408 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.408 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.408 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.667 { 00:13:15.667 "auth": { 00:13:15.667 "dhgroup": "ffdhe6144", 00:13:15.667 "digest": "sha512", 00:13:15.667 "state": "completed" 00:13:15.667 }, 00:13:15.667 "cntlid": 129, 00:13:15.667 "listen_address": { 00:13:15.667 "adrfam": "IPv4", 00:13:15.667 "traddr": "10.0.0.2", 00:13:15.667 "trsvcid": "4420", 00:13:15.667 "trtype": "TCP" 00:13:15.667 }, 00:13:15.667 "peer_address": { 00:13:15.667 "adrfam": "IPv4", 00:13:15.667 "traddr": "10.0.0.1", 00:13:15.667 "trsvcid": "60232", 00:13:15.667 "trtype": "TCP" 00:13:15.667 }, 00:13:15.667 "qid": 0, 00:13:15.667 "state": "enabled", 00:13:15.667 "thread": "nvmf_tgt_poll_group_000" 00:13:15.667 } 00:13:15.667 ]' 00:13:15.667 18:00:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.667 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.232 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:13:16.795 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.795 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:16.795 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.795 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.795 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.795 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.795 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:16.795 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:17.052 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:17.052 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.052 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:17.053 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:17.053 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:17.053 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:13:17.053 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.053 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.053 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.053 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.053 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.053 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.616 00:13:17.616 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.616 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.616 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.874 { 00:13:17.874 "auth": { 00:13:17.874 "dhgroup": "ffdhe6144", 00:13:17.874 "digest": "sha512", 00:13:17.874 "state": "completed" 00:13:17.874 }, 00:13:17.874 "cntlid": 131, 00:13:17.874 "listen_address": { 00:13:17.874 "adrfam": "IPv4", 00:13:17.874 "traddr": "10.0.0.2", 00:13:17.874 "trsvcid": "4420", 00:13:17.874 "trtype": "TCP" 00:13:17.874 }, 00:13:17.874 "peer_address": { 00:13:17.874 "adrfam": "IPv4", 00:13:17.874 "traddr": "10.0.0.1", 00:13:17.874 "trsvcid": "43220", 00:13:17.874 "trtype": "TCP" 00:13:17.874 }, 00:13:17.874 "qid": 0, 00:13:17.874 "state": "enabled", 00:13:17.874 "thread": "nvmf_tgt_poll_group_000" 00:13:17.874 } 00:13:17.874 ]' 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.874 18:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.874 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.131 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:13:19.078 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.078 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:19.078 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.078 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.078 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.078 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:19.078 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:19.078 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.078 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.646 00:13:19.646 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.646 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.646 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.905 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.905 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.905 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.905 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.905 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.905 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.905 { 00:13:19.905 "auth": { 00:13:19.905 "dhgroup": "ffdhe6144", 00:13:19.905 "digest": "sha512", 00:13:19.905 "state": "completed" 00:13:19.905 }, 00:13:19.905 "cntlid": 133, 00:13:19.905 "listen_address": { 00:13:19.905 "adrfam": "IPv4", 00:13:19.905 "traddr": "10.0.0.2", 00:13:19.905 "trsvcid": "4420", 00:13:19.905 "trtype": "TCP" 00:13:19.905 }, 00:13:19.905 "peer_address": { 00:13:19.905 "adrfam": "IPv4", 00:13:19.905 "traddr": "10.0.0.1", 00:13:19.905 "trsvcid": "43242", 00:13:19.905 "trtype": "TCP" 00:13:19.905 }, 00:13:19.905 "qid": 0, 00:13:19.905 "state": "enabled", 00:13:19.905 "thread": "nvmf_tgt_poll_group_000" 00:13:19.905 } 00:13:19.905 ]' 00:13:19.905 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.905 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.905 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.905 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:19.905 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.164 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.164 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.164 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.424 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:13:20.990 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.990 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:20.990 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.990 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.990 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.990 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.990 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:20.990 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:21.249 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:21.816 00:13:21.816 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.816 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.816 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:22.075 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.075 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.075 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.075 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.075 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.075 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:22.075 { 00:13:22.075 "auth": { 00:13:22.075 "dhgroup": "ffdhe6144", 00:13:22.075 "digest": "sha512", 00:13:22.075 "state": "completed" 00:13:22.075 }, 00:13:22.075 "cntlid": 135, 00:13:22.075 "listen_address": { 00:13:22.075 "adrfam": "IPv4", 00:13:22.075 "traddr": "10.0.0.2", 00:13:22.075 "trsvcid": "4420", 00:13:22.075 "trtype": "TCP" 00:13:22.075 }, 00:13:22.075 "peer_address": { 00:13:22.075 "adrfam": "IPv4", 00:13:22.075 "traddr": "10.0.0.1", 00:13:22.075 "trsvcid": "43272", 00:13:22.075 "trtype": "TCP" 00:13:22.075 }, 00:13:22.075 "qid": 0, 00:13:22.075 "state": "enabled", 00:13:22.075 "thread": "nvmf_tgt_poll_group_000" 00:13:22.075 } 00:13:22.075 ]' 00:13:22.076 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:22.076 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:22.076 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.076 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:22.076 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.076 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.076 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.076 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.335 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:13:23.271 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.271 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:23.271 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.271 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.271 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.271 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:23.271 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.271 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:23.271 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.271 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.205 00:13:24.205 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.205 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.205 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.205 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.205 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.205 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.205 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.205 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.205 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.205 { 00:13:24.205 "auth": { 00:13:24.205 "dhgroup": "ffdhe8192", 00:13:24.205 "digest": "sha512", 00:13:24.205 "state": "completed" 00:13:24.205 }, 00:13:24.205 "cntlid": 137, 00:13:24.205 "listen_address": { 00:13:24.205 "adrfam": "IPv4", 00:13:24.205 "traddr": "10.0.0.2", 00:13:24.205 "trsvcid": "4420", 00:13:24.205 "trtype": "TCP" 00:13:24.205 }, 00:13:24.205 "peer_address": { 00:13:24.205 "adrfam": "IPv4", 00:13:24.205 "traddr": "10.0.0.1", 00:13:24.205 "trsvcid": "43296", 00:13:24.205 "trtype": "TCP" 00:13:24.205 }, 00:13:24.205 "qid": 0, 00:13:24.205 "state": "enabled", 00:13:24.205 "thread": "nvmf_tgt_poll_group_000" 00:13:24.205 } 00:13:24.205 ]' 00:13:24.205 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.205 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.205 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.463 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:24.463 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.463 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.463 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.463 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.722 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:13:25.286 18:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.286 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:25.286 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.286 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.286 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.286 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.286 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.286 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.852 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.422 00:13:26.422 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.422 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.422 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.686 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.686 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.686 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.686 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.686 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.686 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.686 { 00:13:26.686 "auth": { 00:13:26.686 "dhgroup": "ffdhe8192", 00:13:26.686 "digest": "sha512", 00:13:26.686 "state": "completed" 00:13:26.686 }, 00:13:26.686 "cntlid": 139, 00:13:26.686 "listen_address": { 00:13:26.686 "adrfam": "IPv4", 00:13:26.686 "traddr": "10.0.0.2", 00:13:26.686 "trsvcid": "4420", 00:13:26.686 "trtype": "TCP" 00:13:26.686 }, 00:13:26.686 "peer_address": { 00:13:26.686 "adrfam": "IPv4", 00:13:26.686 "traddr": "10.0.0.1", 00:13:26.686 "trsvcid": "53630", 00:13:26.686 "trtype": "TCP" 00:13:26.686 }, 00:13:26.686 "qid": 0, 00:13:26.686 "state": "enabled", 00:13:26.686 "thread": "nvmf_tgt_poll_group_000" 00:13:26.686 } 00:13:26.686 ]' 00:13:26.686 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.686 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.686 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.944 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:26.944 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.944 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.944 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.944 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.200 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:01:NDk5MGI4NmM4ZjkzOWZmNWRkNTA3MjdkYzFlNTYwZDZeUbDP: --dhchap-ctrl-secret DHHC-1:02:YzgwZDU3YTk3ZThiNGQ0YzRjZDQ5ZWZkNzEzZDYwMmRhZDcwNTc0NmE1YTk3NmU41ILC8Q==: 00:13:27.765 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.765 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 
00:13:27.765 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.765 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.765 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.765 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.765 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:27.765 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.023 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.589 00:13:28.848 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.848 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.848 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.848 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.848 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.848 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.848 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.848 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.848 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.848 { 00:13:28.848 "auth": { 00:13:28.848 "dhgroup": "ffdhe8192", 00:13:28.848 "digest": "sha512", 00:13:28.848 "state": "completed" 00:13:28.848 }, 00:13:28.848 "cntlid": 141, 00:13:28.848 "listen_address": { 00:13:28.848 "adrfam": "IPv4", 00:13:28.848 "traddr": "10.0.0.2", 00:13:28.848 "trsvcid": "4420", 00:13:28.848 "trtype": "TCP" 00:13:28.848 }, 00:13:28.848 "peer_address": { 00:13:28.848 "adrfam": "IPv4", 00:13:28.848 "traddr": "10.0.0.1", 00:13:28.848 "trsvcid": "53650", 00:13:28.848 "trtype": "TCP" 00:13:28.848 }, 00:13:28.848 "qid": 0, 00:13:28.848 "state": "enabled", 00:13:28.848 "thread": "nvmf_tgt_poll_group_000" 00:13:28.848 } 00:13:28.848 ]' 00:13:28.848 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.106 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.106 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.106 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:29.106 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.106 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.106 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.106 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.364 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:02:NDk0YTZlZDYzYTEwZjc4Njg4ZTA4ODU0MTE4N2E0YmM2ZTQ3MzExMzg3ZmFlNmM3HQ3/zg==: --dhchap-ctrl-secret DHHC-1:01:NjlkYWM5MjJiNzZiOTNlMzQyNTVhY2Q1MDM3YjFkMGLmF0sB: 00:13:29.932 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.932 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:29.932 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.932 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.932 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.932 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.932 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:29.932 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:30.190 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:30.804 00:13:30.804 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:30.804 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.804 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.062 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.062 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.062 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.062 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.062 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.062 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:31.062 { 00:13:31.062 "auth": { 00:13:31.062 "dhgroup": "ffdhe8192", 00:13:31.062 "digest": "sha512", 00:13:31.062 "state": "completed" 00:13:31.062 }, 00:13:31.062 "cntlid": 143, 00:13:31.062 "listen_address": { 00:13:31.062 "adrfam": "IPv4", 00:13:31.062 "traddr": "10.0.0.2", 00:13:31.062 "trsvcid": "4420", 00:13:31.062 "trtype": "TCP" 00:13:31.062 }, 00:13:31.062 "peer_address": { 00:13:31.062 "adrfam": "IPv4", 00:13:31.062 "traddr": "10.0.0.1", 00:13:31.062 "trsvcid": "53672", 00:13:31.062 "trtype": "TCP" 00:13:31.062 }, 00:13:31.062 "qid": 0, 00:13:31.062 "state": "enabled", 00:13:31.062 "thread": "nvmf_tgt_poll_group_000" 00:13:31.062 } 00:13:31.062 ]' 00:13:31.062 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:31.062 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.062 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:31.062 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:31.062 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:31.319 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.319 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.319 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.577 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:13:32.146 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.146 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:32.146 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.146 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.146 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.146 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:32.146 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:32.146 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:32.146 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:32.146 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:32.146 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.733 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.314 00:13:33.314 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:33.314 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.314 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:33.574 { 
00:13:33.574 "auth": { 00:13:33.574 "dhgroup": "ffdhe8192", 00:13:33.574 "digest": "sha512", 00:13:33.574 "state": "completed" 00:13:33.574 }, 00:13:33.574 "cntlid": 145, 00:13:33.574 "listen_address": { 00:13:33.574 "adrfam": "IPv4", 00:13:33.574 "traddr": "10.0.0.2", 00:13:33.574 "trsvcid": "4420", 00:13:33.574 "trtype": "TCP" 00:13:33.574 }, 00:13:33.574 "peer_address": { 00:13:33.574 "adrfam": "IPv4", 00:13:33.574 "traddr": "10.0.0.1", 00:13:33.574 "trsvcid": "53690", 00:13:33.574 "trtype": "TCP" 00:13:33.574 }, 00:13:33.574 "qid": 0, 00:13:33.574 "state": "enabled", 00:13:33.574 "thread": "nvmf_tgt_poll_group_000" 00:13:33.574 } 00:13:33.574 ]' 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.574 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.833 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:00:YTNkOTA5MGE1MzE2MDQwYzM0MTlkMTM1YTE3ZDk1NjFmZTQ0MjYzYTkxYzgwYzA4/DmDlw==: --dhchap-ctrl-secret DHHC-1:03:NDRiM2Q1MDU4NGY0NTU4M2Q1NTgwOWFhYmJkMzAyNTkyOTNhY2ZkOWJlNDRhOTZlMGQxZWI3ZGU0ZDA4ZDQ3MI2Cnl4=: 00:13:34.770 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.770 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:34.771 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:35.369 2024/07/24 18:00:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:35.369 request: 00:13:35.369 { 00:13:35.369 "method": "bdev_nvme_attach_controller", 00:13:35.369 "params": { 00:13:35.369 "name": "nvme0", 00:13:35.369 "trtype": "tcp", 00:13:35.369 "traddr": "10.0.0.2", 00:13:35.369 "adrfam": "ipv4", 00:13:35.369 "trsvcid": "4420", 00:13:35.369 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:35.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee", 00:13:35.369 "prchk_reftag": false, 00:13:35.369 "prchk_guard": false, 00:13:35.369 "hdgst": false, 00:13:35.369 "ddgst": false, 00:13:35.369 "dhchap_key": "key2" 00:13:35.369 } 00:13:35.369 } 00:13:35.369 Got JSON-RPC error response 00:13:35.369 GoRPCClient: error on JSON-RPC call 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.369 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.370 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:35.370 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:35.370 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:35.370 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:35.370 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.370 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:35.370 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.370 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:35.370 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:36.007 2024/07/24 18:00:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee name:nvme0 prchk_guard:%!s(bool=false) 
prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:36.007 request: 00:13:36.007 { 00:13:36.007 "method": "bdev_nvme_attach_controller", 00:13:36.007 "params": { 00:13:36.007 "name": "nvme0", 00:13:36.007 "trtype": "tcp", 00:13:36.007 "traddr": "10.0.0.2", 00:13:36.007 "adrfam": "ipv4", 00:13:36.007 "trsvcid": "4420", 00:13:36.007 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:36.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee", 00:13:36.007 "prchk_reftag": false, 00:13:36.007 "prchk_guard": false, 00:13:36.007 "hdgst": false, 00:13:36.007 "ddgst": false, 00:13:36.007 "dhchap_key": "key1", 00:13:36.007 "dhchap_ctrlr_key": "ckey2" 00:13:36.007 } 00:13:36.007 } 00:13:36.007 Got JSON-RPC error response 00:13:36.007 GoRPCClient: error on JSON-RPC call 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key1 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:36.007 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.008 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.008 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.574 2024/07/24 18:00:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:36.574 request: 00:13:36.574 { 00:13:36.574 "method": "bdev_nvme_attach_controller", 00:13:36.574 "params": { 00:13:36.574 "name": "nvme0", 00:13:36.574 "trtype": "tcp", 00:13:36.574 "traddr": "10.0.0.2", 00:13:36.574 "adrfam": "ipv4", 00:13:36.574 "trsvcid": "4420", 00:13:36.574 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:36.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee", 00:13:36.574 "prchk_reftag": false, 00:13:36.574 "prchk_guard": false, 00:13:36.574 "hdgst": false, 00:13:36.574 "ddgst": false, 00:13:36.574 "dhchap_key": "key1", 00:13:36.574 "dhchap_ctrlr_key": "ckey1" 00:13:36.574 } 00:13:36.574 } 00:13:36.574 Got JSON-RPC error response 00:13:36.574 GoRPCClient: error on JSON-RPC call 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77092 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 77092 ']' 
00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 77092 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77092 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:36.574 killing process with pid 77092 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77092' 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 77092 00:13:36.574 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 77092 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=81961 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 81961 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 81961 ']' 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.833 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.769 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.769 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:37.769 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:37.769 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:37.769 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.769 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.769 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:38.026 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 81961 00:13:38.026 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 81961 ']' 00:13:38.026 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.026 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.026 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:38.026 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.026 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.285 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:39.219 00:13:39.219 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.219 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.219 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.219 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.219 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:13:39.219 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.219 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.219 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.219 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.219 { 00:13:39.219 "auth": { 00:13:39.219 "dhgroup": "ffdhe8192", 00:13:39.219 "digest": "sha512", 00:13:39.219 "state": "completed" 00:13:39.219 }, 00:13:39.219 "cntlid": 1, 00:13:39.219 "listen_address": { 00:13:39.219 "adrfam": "IPv4", 00:13:39.219 "traddr": "10.0.0.2", 00:13:39.219 "trsvcid": "4420", 00:13:39.219 "trtype": "TCP" 00:13:39.219 }, 00:13:39.219 "peer_address": { 00:13:39.219 "adrfam": "IPv4", 00:13:39.219 "traddr": "10.0.0.1", 00:13:39.219 "trsvcid": "41266", 00:13:39.219 "trtype": "TCP" 00:13:39.219 }, 00:13:39.219 "qid": 0, 00:13:39.219 "state": "enabled", 00:13:39.219 "thread": "nvmf_tgt_poll_group_000" 00:13:39.219 } 00:13:39.219 ]' 00:13:39.219 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.476 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.476 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.476 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:39.476 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.476 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.476 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.476 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.769 18:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-secret DHHC-1:03:OTc5ZmNkM2M2YmRiZGQyYWZlNDYzYTIyN2Q4ZThhYmZkNTBjM2FhOWJhNTBmNTVhZmM3NzA1NTYwODViZTdmOKIxK8Y=: 00:13:40.707 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.707 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:40.707 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.707 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.707 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.707 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --dhchap-key key3 00:13:40.707 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.707 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.707 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.707 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:40.707 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:40.964 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:40.964 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:40.964 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:40.964 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:40.964 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.964 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:40.964 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.964 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:40.964 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:41.223 2024/07/24 18:00:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:41.223 request: 00:13:41.223 { 00:13:41.223 "method": "bdev_nvme_attach_controller", 00:13:41.223 "params": { 00:13:41.223 "name": "nvme0", 00:13:41.223 "trtype": "tcp", 00:13:41.223 "traddr": "10.0.0.2", 00:13:41.223 "adrfam": "ipv4", 00:13:41.223 "trsvcid": "4420", 00:13:41.223 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:41.223 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee", 00:13:41.223 "prchk_reftag": false, 00:13:41.223 "prchk_guard": false, 00:13:41.223 "hdgst": false, 00:13:41.223 "ddgst": false, 00:13:41.223 "dhchap_key": "key3" 00:13:41.223 } 00:13:41.223 } 00:13:41.223 Got JSON-RPC error response 00:13:41.223 GoRPCClient: error on JSON-RPC call 00:13:41.223 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:41.223 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.223 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.223 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.223 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:41.223 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:41.223 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:41.223 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:41.487 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:41.487 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:41.487 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:41.487 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:41.487 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.487 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:41.487 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.487 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:41.487 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:41.746 2024/07/24 18:00:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:41.746 request: 00:13:41.746 { 00:13:41.746 "method": "bdev_nvme_attach_controller", 00:13:41.746 "params": { 00:13:41.746 "name": "nvme0", 00:13:41.746 "trtype": "tcp", 00:13:41.746 "traddr": "10.0.0.2", 00:13:41.746 "adrfam": "ipv4", 00:13:41.746 "trsvcid": "4420", 00:13:41.746 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:41.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee", 00:13:41.746 "prchk_reftag": false, 00:13:41.746 "prchk_guard": false, 00:13:41.746 "hdgst": false, 00:13:41.746 "ddgst": false, 00:13:41.746 "dhchap_key": "key3" 00:13:41.746 } 00:13:41.746 } 00:13:41.746 Got JSON-RPC error response 00:13:41.746 GoRPCClient: error on JSON-RPC call 00:13:41.746 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:41.746 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.746 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.746 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.746 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:41.746 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:13:41.746 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:41.746 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:41.746 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:41.746 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.021 18:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.021 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:42.022 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:42.279 2024/07/24 18:00:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:42.279 request: 00:13:42.279 { 00:13:42.279 "method": "bdev_nvme_attach_controller", 00:13:42.279 "params": { 00:13:42.279 "name": "nvme0", 00:13:42.279 "trtype": "tcp", 00:13:42.279 "traddr": "10.0.0.2", 00:13:42.279 "adrfam": "ipv4", 00:13:42.279 "trsvcid": "4420", 00:13:42.279 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:42.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee", 00:13:42.279 "prchk_reftag": false, 00:13:42.279 "prchk_guard": false, 00:13:42.279 "hdgst": false, 00:13:42.279 "ddgst": false, 00:13:42.279 "dhchap_key": "key0", 00:13:42.279 "dhchap_ctrlr_key": "key1" 00:13:42.279 } 00:13:42.279 } 00:13:42.279 Got JSON-RPC error response 00:13:42.279 GoRPCClient: error on JSON-RPC call 00:13:42.279 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:42.279 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 
-- # (( es > 128 )) 00:13:42.279 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:42.279 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:42.279 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:42.279 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:42.537 00:13:42.537 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:42.537 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.537 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:42.796 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.796 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.796 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77136 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 77136 ']' 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 77136 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77136 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:43.054 killing process with pid 77136 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77136' 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 77136 00:13:43.054 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 77136 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:43.620 18:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.620 rmmod nvme_tcp 00:13:43.620 rmmod nvme_fabrics 00:13:43.620 rmmod nvme_keyring 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 81961 ']' 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 81961 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 81961 ']' 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 81961 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81961 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.620 killing process with pid 81961 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81961' 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 81961 00:13:43.620 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 81961 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.877 18:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.FwG /tmp/spdk.key-sha256.Gd6 /tmp/spdk.key-sha384.IjN /tmp/spdk.key-sha512.Xwj /tmp/spdk.key-sha512.McJ /tmp/spdk.key-sha384.Lhr /tmp/spdk.key-sha256.LRv '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:43.877 00:13:43.877 real 2m49.273s 00:13:43.877 user 6m43.912s 00:13:43.877 sys 0m29.022s 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.877 ************************************ 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.877 END TEST nvmf_auth_target 00:13:43.877 ************************************ 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:43.877 ************************************ 00:13:43.877 START TEST nvmf_bdevio_no_huge 00:13:43.877 ************************************ 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:43.877 * Looking for test storage... 
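For reference, the DH-CHAP host-side flow exercised by the nvmf_auth_target suite above reduces to two rpc.py calls against the host app's socket. This is a condensed sketch of commands already visible in the log; the socket path, addresses and key names are the ones used by this run, nothing new is introduced:

# allow every digest and DH group on the host side
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
# attach with the key the target accepts; in this run key0 succeeds, while key3 or a
# mismatched controller key fails with Code=-5 (Input/output error), which is exactly
# what the NOT wrapper in the log asserts
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0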
00:13:43.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.877 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.136 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:44.136 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:44.137 Cannot find device "nvmf_tgt_br" 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.137 Cannot find device "nvmf_tgt_br2" 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:44.137 Cannot find device "nvmf_tgt_br" 00:13:44.137 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:44.138 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:44.138 Cannot find device "nvmf_tgt_br2" 00:13:44.138 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:44.138 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:44.138 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:44.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:44.138 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:44.396 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:44.396 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:44.396 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:44.396 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:44.396 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:44.396 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:44.396 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:44.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:13:44.397 00:13:44.397 --- 10.0.0.2 ping statistics --- 00:13:44.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.397 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:44.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:13:44.397 00:13:44.397 --- 10.0.0.3 ping statistics --- 00:13:44.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.397 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:44.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:13:44.397 00:13:44.397 --- 10.0.0.1 ping statistics --- 00:13:44.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.397 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=82367 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 82367 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 82367 ']' 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.397 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:44.397 [2024-07-24 18:00:51.320974] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
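The nvmf_veth_init step above builds the whole TCP test network from veth pairs, one network namespace and a bridge. Condensed from the ip/iptables commands in the log (the per-link "up" steps and error handling are omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator-to-target reachability, as checked above

The sub-millisecond RTTs in the ping output confirm the bridge path before nvmf_tgt is started inside the namespace with --no-huge -s 1024 -m 0x78.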
00:13:44.397 [2024-07-24 18:00:51.321090] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:44.657 [2024-07-24 18:00:51.472038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.657 [2024-07-24 18:00:51.621350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.657 [2024-07-24 18:00:51.621435] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.657 [2024-07-24 18:00:51.621449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.657 [2024-07-24 18:00:51.621461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.657 [2024-07-24 18:00:51.621472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.657 [2024-07-24 18:00:51.621603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:44.657 [2024-07-24 18:00:51.621709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:44.657 [2024-07-24 18:00:51.621787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:44.657 [2024-07-24 18:00:51.621796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.590 [2024-07-24 18:00:52.522631] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.590 Malloc0 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.590 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.590 [2024-07-24 18:00:52.562857] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.915 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.915 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:45.915 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:45.915 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:45.915 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:45.915 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:45.915 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:45.915 { 00:13:45.915 "params": { 00:13:45.915 "name": "Nvme$subsystem", 00:13:45.915 "trtype": "$TEST_TRANSPORT", 00:13:45.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:45.915 "adrfam": "ipv4", 00:13:45.915 "trsvcid": "$NVMF_PORT", 00:13:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:45.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:45.915 "hdgst": ${hdgst:-false}, 00:13:45.915 "ddgst": ${ddgst:-false} 00:13:45.915 }, 00:13:45.915 "method": "bdev_nvme_attach_controller" 00:13:45.915 } 00:13:45.915 EOF 00:13:45.915 )") 00:13:45.915 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:45.915 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
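The target side of this bdevio run needs only a transport, one RAM-backed bdev and a subsystem exposing it on the veth address; the rpc_cmd calls above amount to:

nvmf_create_transport -t tcp -o -u 8192                            # TCP transport with the test's options
bdev_malloc_create 64 512 -b Malloc0                               # 64 MiB malloc bdev, 512-byte blocks
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set serial
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio app itself is then launched with --json /dev/fd/62, fed by the gen_nvmf_target_json template whose expansion appears just below, so it attaches to 10.0.0.2:4420 as nqn.2016-06.io.spdk:host1.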
00:13:45.915 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:45.915 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:45.915 "params": { 00:13:45.915 "name": "Nvme1", 00:13:45.915 "trtype": "tcp", 00:13:45.915 "traddr": "10.0.0.2", 00:13:45.916 "adrfam": "ipv4", 00:13:45.916 "trsvcid": "4420", 00:13:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.916 "hdgst": false, 00:13:45.916 "ddgst": false 00:13:45.916 }, 00:13:45.916 "method": "bdev_nvme_attach_controller" 00:13:45.916 }' 00:13:45.916 [2024-07-24 18:00:52.632728] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:13:45.916 [2024-07-24 18:00:52.632879] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82428 ] 00:13:45.916 [2024-07-24 18:00:52.795965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:46.173 [2024-07-24 18:00:52.992419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.173 [2024-07-24 18:00:52.992516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.173 [2024-07-24 18:00:52.992522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.431 I/O targets: 00:13:46.431 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:46.431 00:13:46.431 00:13:46.431 CUnit - A unit testing framework for C - Version 2.1-3 00:13:46.431 http://cunit.sourceforge.net/ 00:13:46.431 00:13:46.431 00:13:46.431 Suite: bdevio tests on: Nvme1n1 00:13:46.431 Test: blockdev write read block ...passed 00:13:46.431 Test: blockdev write zeroes read block ...passed 00:13:46.431 Test: blockdev write zeroes read no split ...passed 00:13:46.431 Test: blockdev write zeroes read split ...passed 00:13:46.431 Test: blockdev write zeroes read split partial ...passed 00:13:46.431 Test: blockdev reset ...[2024-07-24 18:00:53.354486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:46.431 [2024-07-24 18:00:53.354647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ba460 (9): Bad file descriptor 00:13:46.431 [2024-07-24 18:00:53.375207] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:46.431 passed 00:13:46.431 Test: blockdev write read 8 blocks ...passed 00:13:46.431 Test: blockdev write read size > 128k ...passed 00:13:46.431 Test: blockdev write read invalid size ...passed 00:13:46.689 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:46.689 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:46.689 Test: blockdev write read max offset ...passed 00:13:46.689 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:46.689 Test: blockdev writev readv 8 blocks ...passed 00:13:46.689 Test: blockdev writev readv 30 x 1block ...passed 00:13:46.689 Test: blockdev writev readv block ...passed 00:13:46.689 Test: blockdev writev readv size > 128k ...passed 00:13:46.689 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:46.689 Test: blockdev comparev and writev ...[2024-07-24 18:00:53.554783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.689 [2024-07-24 18:00:53.554860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:46.689 [2024-07-24 18:00:53.554889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.689 [2024-07-24 18:00:53.554909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:46.689 [2024-07-24 18:00:53.555618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.689 [2024-07-24 18:00:53.555668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:46.689 [2024-07-24 18:00:53.555697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.689 [2024-07-24 18:00:53.555716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:46.689 [2024-07-24 18:00:53.556369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.689 [2024-07-24 18:00:53.556411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:46.689 [2024-07-24 18:00:53.556439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.689 [2024-07-24 18:00:53.556458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:46.689 [2024-07-24 18:00:53.557057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.689 [2024-07-24 18:00:53.557103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:46.689 [2024-07-24 18:00:53.557131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.689 [2024-07-24 18:00:53.557151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:46.689 passed 00:13:46.689 Test: blockdev nvme passthru rw ...passed 00:13:46.689 Test: blockdev nvme passthru vendor specific ...[2024-07-24 18:00:53.642212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.689 [2024-07-24 18:00:53.642305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:46.689 [2024-07-24 18:00:53.642848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.689 [2024-07-24 18:00:53.642897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:46.689 [2024-07-24 18:00:53.643356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.689 [2024-07-24 18:00:53.643402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:46.689 [2024-07-24 18:00:53.643796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.689 [2024-07-24 18:00:53.643839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:46.689 passed 00:13:46.689 Test: blockdev nvme admin passthru ...passed 00:13:46.947 Test: blockdev copy ...passed 00:13:46.947 00:13:46.948 Run Summary: Type Total Ran Passed Failed Inactive 00:13:46.948 suites 1 1 n/a 0 0 00:13:46.948 tests 23 23 23 0 0 00:13:46.948 asserts 152 152 152 0 n/a 00:13:46.948 00:13:46.948 Elapsed time = 0.950 seconds 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:47.514 rmmod nvme_tcp 00:13:47.514 rmmod nvme_fabrics 00:13:47.514 rmmod nvme_keyring 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 82367 ']' 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 82367 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 82367 ']' 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 82367 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82367 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:47.514 killing process with pid 82367 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82367' 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 82367 00:13:47.514 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 82367 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:48.085 00:13:48.085 real 0m4.058s 00:13:48.085 user 0m14.831s 00:13:48.085 sys 0m1.660s 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:48.085 ************************************ 00:13:48.085 END TEST nvmf_bdevio_no_huge 00:13:48.085 ************************************ 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
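Both suites finish with the same nvmftestfini teardown, seen above for the bdevio target (pid 82367). Condensed, with the namespace removal left as the helper call because its body is redirected away in this log:

kill 82367; wait 82367            # killprocess: stop the nvmf_tgt reactor, then wait for it to exit
modprobe -v -r nvme-tcp           # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring dropping out
modprobe -v -r nvme-fabrics
_remove_spdk_ns                   # namespace cleanup helper; its output is suppressed above
ip -4 addr flush nvmf_init_if     # drop the initiator-side test address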
00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.085 ************************************ 00:13:48.085 START TEST nvmf_tls 00:13:48.085 ************************************ 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:48.085 * Looking for test storage... 00:13:48.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
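The NVME_HOSTNQN/NVME_HOSTID pair recorded above comes from nvme-cli. A minimal sketch of how the two values relate (the parameter expansion is an illustration, not the exact nvmf/common.sh code; gen-hostnqn returns a fresh UUID each time):

  # Derive a host NQN and the matching host ID used for --hostnqn/--hostid.
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep only the trailing UUID
  echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"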
00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:48.085 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.086 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:48.086 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:48.086 Cannot find device 
"nvmf_tgt_br" 00:13:48.086 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:48.086 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.086 Cannot find device "nvmf_tgt_br2" 00:13:48.086 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:48.086 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:48.086 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:48.086 Cannot find device "nvmf_tgt_br" 00:13:48.086 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:48.086 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:48.086 Cannot find device "nvmf_tgt_br2" 00:13:48.086 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:48.086 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:48.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:13:48.345 00:13:48.345 --- 10.0.0.2 ping statistics --- 00:13:48.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.345 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:13:48.345 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:48.602 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:48.602 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:13:48.602 00:13:48.602 --- 10.0.0.3 ping statistics --- 00:13:48.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.602 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:48.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:48.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:48.602 00:13:48.602 --- 10.0.0.1 ping statistics --- 00:13:48.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.602 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=82616 00:13:48.602 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:48.603 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 82616 00:13:48.603 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 82616 ']' 00:13:48.603 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.603 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.603 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.603 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.603 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.603 [2024-07-24 18:00:55.409409] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:13:48.603 [2024-07-24 18:00:55.409979] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.603 [2024-07-24 18:00:55.549218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.859 [2024-07-24 18:00:55.657861] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.859 [2024-07-24 18:00:55.657918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.859 [2024-07-24 18:00:55.657929] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.859 [2024-07-24 18:00:55.657940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.859 [2024-07-24 18:00:55.657948] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.859 [2024-07-24 18:00:55.657982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.424 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.424 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:49.424 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.424 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:49.424 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.424 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.424 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:49.424 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:50.016 true 00:13:50.016 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:50.016 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:50.016 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:50.016 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:50.016 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:50.299 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:50.299 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:50.557 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:50.557 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:50.557 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:50.824 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:50.824 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:51.088 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:51.088 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:51.088 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:51.088 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:51.653 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:51.653 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:51.653 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:51.653 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:51.653 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:51.910 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:51.910 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:51.910 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:52.169 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:52.169 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:52.428 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:52.687 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:52.687 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:52.687 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.Ib7gIzbgTM 00:13:52.687 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:52.687 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.XCsBoiBMSN 00:13:52.687 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:52.687 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:52.687 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.Ib7gIzbgTM 00:13:52.687 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.XCsBoiBMSN 00:13:52.687 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:52.945 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:53.205 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.Ib7gIzbgTM 00:13:53.205 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ib7gIzbgTM 00:13:53.205 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:53.463 [2024-07-24 18:01:00.340820] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.463 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:53.720 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:53.978 [2024-07-24 18:01:00.872933] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:53.978 [2024-07-24 18:01:00.873161] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.978 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:54.242 malloc0 00:13:54.242 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:54.500 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ib7gIzbgTM 00:13:55.064 [2024-07-24 18:01:01.735659] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: 
nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:55.064 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Ib7gIzbgTM 00:14:05.100 Initializing NVMe Controllers 00:14:05.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:05.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:05.100 Initialization complete. Launching workers. 00:14:05.100 ======================================================== 00:14:05.100 Latency(us) 00:14:05.100 Device Information : IOPS MiB/s Average min max 00:14:05.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12407.28 48.47 5158.84 1635.20 8525.86 00:14:05.100 ======================================================== 00:14:05.100 Total : 12407.28 48.47 5158.84 1635.20 8525.86 00:14:05.100 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ib7gIzbgTM 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ib7gIzbgTM' 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82985 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82985 /var/tmp/bdevperf.sock 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 82985 ']' 00:14:05.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.100 18:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:05.100 [2024-07-24 18:01:12.027922] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:14:05.100 [2024-07-24 18:01:12.028026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82985 ] 00:14:05.359 [2024-07-24 18:01:12.169651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.359 [2024-07-24 18:01:12.291586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.303 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.303 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:06.303 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ib7gIzbgTM 00:14:06.303 [2024-07-24 18:01:13.244808] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:06.303 [2024-07-24 18:01:13.244914] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:06.560 TLSTESTn1 00:14:06.560 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:06.560 Running I/O for 10 seconds... 00:14:16.531 00:14:16.531 Latency(us) 00:14:16.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.531 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:16.531 Verification LBA range: start 0x0 length 0x2000 00:14:16.531 TLSTESTn1 : 10.01 4379.12 17.11 0.00 0.00 29184.93 5523.75 47185.92 00:14:16.531 =================================================================================================================== 00:14:16.531 Total : 4379.12 17.11 0.00 0.00 29184.93 5523.75 47185.92 00:14:16.531 0 00:14:16.531 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:16.531 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 82985 00:14:16.531 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 82985 ']' 00:14:16.531 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 82985 00:14:16.531 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:16.531 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.531 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82985 00:14:16.790 killing process with pid 82985 00:14:16.790 Received shutdown signal, test time was about 10.000000 seconds 00:14:16.790 00:14:16.790 Latency(us) 00:14:16.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.790 =================================================================================================================== 00:14:16.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.790 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:14:16.790 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:16.790 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82985' 00:14:16.790 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 82985 00:14:16.790 [2024-07-24 18:01:23.524690] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:16.790 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 82985 00:14:16.790 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XCsBoiBMSN 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XCsBoiBMSN 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XCsBoiBMSN 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XCsBoiBMSN' 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83133 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83133 /var/tmp/bdevperf.sock 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83133 ']' 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.791 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.048 [2024-07-24 18:01:23.820879] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:14:17.048 [2024-07-24 18:01:23.821877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83133 ] 00:14:17.048 [2024-07-24 18:01:23.979482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.306 [2024-07-24 18:01:24.092790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.872 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.872 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:17.872 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XCsBoiBMSN 00:14:18.136 [2024-07-24 18:01:25.084609] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.136 [2024-07-24 18:01:25.084737] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:18.136 [2024-07-24 18:01:25.090640] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:18.136 [2024-07-24 18:01:25.091501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd4ca0 (107): Transport endpoint is not connected 00:14:18.136 [2024-07-24 18:01:25.092482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd4ca0 (9): Bad file descriptor 00:14:18.136 [2024-07-24 18:01:25.093478] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:18.136 [2024-07-24 18:01:25.093511] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:18.136 [2024-07-24 18:01:25.093530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:18.136 2024/07/24 18:01:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.XCsBoiBMSN subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:18.136 request: 00:14:18.136 { 00:14:18.136 "method": "bdev_nvme_attach_controller", 00:14:18.136 "params": { 00:14:18.136 "name": "TLSTEST", 00:14:18.136 "trtype": "tcp", 00:14:18.136 "traddr": "10.0.0.2", 00:14:18.136 "adrfam": "ipv4", 00:14:18.136 "trsvcid": "4420", 00:14:18.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:18.136 "prchk_reftag": false, 00:14:18.136 "prchk_guard": false, 00:14:18.136 "hdgst": false, 00:14:18.136 "ddgst": false, 00:14:18.136 "psk": "/tmp/tmp.XCsBoiBMSN" 00:14:18.136 } 00:14:18.136 } 00:14:18.136 Got JSON-RPC error response 00:14:18.136 GoRPCClient: error on JSON-RPC call 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83133 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83133 ']' 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83133 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83133 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83133' 00:14:18.394 killing process with pid 83133 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83133 00:14:18.394 Received shutdown signal, test time was about 10.000000 seconds 00:14:18.394 00:14:18.394 Latency(us) 00:14:18.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.394 =================================================================================================================== 00:14:18.394 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83133 00:14:18.394 [2024-07-24 18:01:25.152202] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
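That Input/output error is the expected result of the first negative case: the initiator presents the second key, which was never registered for host1 on cnode1, so the connection is torn down during the TLS handshake. The failing call, reproduced from the bdevperf RPC above:

  # Expected to fail: /tmp/tmp.XCsBoiBMSN was not added via nvmf_subsystem_add_host.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.XCsBoiBMSN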
00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ib7gIzbgTM 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ib7gIzbgTM 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ib7gIzbgTM 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ib7gIzbgTM' 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83180 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83180 /var/tmp/bdevperf.sock 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83180 ']' 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.394 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.652 [2024-07-24 18:01:25.426999] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:14:18.652 [2024-07-24 18:01:25.427133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83180 ] 00:14:18.652 [2024-07-24 18:01:25.572353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.910 [2024-07-24 18:01:25.681929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.495 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.496 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:19.496 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Ib7gIzbgTM 00:14:19.755 [2024-07-24 18:01:26.576640] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.755 [2024-07-24 18:01:26.576802] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:19.755 [2024-07-24 18:01:26.583663] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:19.755 [2024-07-24 18:01:26.583716] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:19.755 [2024-07-24 18:01:26.583773] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:19.755 [2024-07-24 18:01:26.584494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2122ca0 (107): Transport endpoint is not connected 00:14:19.755 [2024-07-24 18:01:26.585460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2122ca0 (9): Bad file descriptor 00:14:19.755 [2024-07-24 18:01:26.586453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:19.755 [2024-07-24 18:01:26.586503] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:19.755 [2024-07-24 18:01:26.586532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:19.755 2024/07/24 18:01:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.Ib7gIzbgTM subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:19.755 request: 00:14:19.755 { 00:14:19.755 "method": "bdev_nvme_attach_controller", 00:14:19.755 "params": { 00:14:19.755 "name": "TLSTEST", 00:14:19.755 "trtype": "tcp", 00:14:19.755 "traddr": "10.0.0.2", 00:14:19.755 "adrfam": "ipv4", 00:14:19.755 "trsvcid": "4420", 00:14:19.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.755 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:19.755 "prchk_reftag": false, 00:14:19.755 "prchk_guard": false, 00:14:19.755 "hdgst": false, 00:14:19.755 "ddgst": false, 00:14:19.755 "psk": "/tmp/tmp.Ib7gIzbgTM" 00:14:19.755 } 00:14:19.755 } 00:14:19.755 Got JSON-RPC error response 00:14:19.755 GoRPCClient: error on JSON-RPC call 00:14:19.755 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83180 00:14:19.755 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83180 ']' 00:14:19.755 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83180 00:14:19.755 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:19.755 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.755 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83180 00:14:19.755 killing process with pid 83180 00:14:19.755 Received shutdown signal, test time was about 10.000000 seconds 00:14:19.755 00:14:19.755 Latency(us) 00:14:19.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.755 =================================================================================================================== 00:14:19.755 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:19.755 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:19.755 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:19.755 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83180' 00:14:19.755 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83180 00:14:19.755 [2024-07-24 18:01:26.641724] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:19.755 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83180 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
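In this second negative case the key is the registered one, but it is presented as host2, for which no PSK exists, hence the "Could not find PSK for identity" errors on the target side. One way to confirm which hosts cnode1 will accept (a sketch, not used in this run; output field names may vary by SPDK version) is to query the running target over its default RPC socket:

  # Dump the configured subsystems; only nqn.2016-06.io.spdk:host1 should appear
  # under "hosts" for nqn.2016-06.io.spdk:cnode1.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems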
00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ib7gIzbgTM 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ib7gIzbgTM 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:20.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ib7gIzbgTM 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ib7gIzbgTM' 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83220 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83220 /var/tmp/bdevperf.sock 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83220 ']' 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.014 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.015 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.015 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.015 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.015 [2024-07-24 18:01:26.901298] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:14:20.015 [2024-07-24 18:01:26.901433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83220 ] 00:14:20.273 [2024-07-24 18:01:27.045042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.273 [2024-07-24 18:01:27.174756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.208 18:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.208 18:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:21.208 18:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ib7gIzbgTM 00:14:21.208 [2024-07-24 18:01:28.088669] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:21.208 [2024-07-24 18:01:28.088791] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:21.208 [2024-07-24 18:01:28.094581] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:21.208 [2024-07-24 18:01:28.094656] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:21.208 [2024-07-24 18:01:28.094746] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:21.208 [2024-07-24 18:01:28.095316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba3ca0 (107): Transport endpoint is not connected 00:14:21.208 [2024-07-24 18:01:28.096274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba3ca0 (9): Bad file descriptor 00:14:21.208 [2024-07-24 18:01:28.097269] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:21.208 [2024-07-24 18:01:28.097299] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:21.208 [2024-07-24 18:01:28.097315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:21.208 2024/07/24 18:01:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.Ib7gIzbgTM subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:21.208 request: 00:14:21.208 { 00:14:21.208 "method": "bdev_nvme_attach_controller", 00:14:21.208 "params": { 00:14:21.208 "name": "TLSTEST", 00:14:21.208 "trtype": "tcp", 00:14:21.208 "traddr": "10.0.0.2", 00:14:21.208 "adrfam": "ipv4", 00:14:21.208 "trsvcid": "4420", 00:14:21.208 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:21.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.208 "prchk_reftag": false, 00:14:21.208 "prchk_guard": false, 00:14:21.208 "hdgst": false, 00:14:21.208 "ddgst": false, 00:14:21.208 "psk": "/tmp/tmp.Ib7gIzbgTM" 00:14:21.208 } 00:14:21.208 } 00:14:21.208 Got JSON-RPC error response 00:14:21.208 GoRPCClient: error on JSON-RPC call 00:14:21.208 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83220 00:14:21.208 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83220 ']' 00:14:21.208 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83220 00:14:21.208 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:21.208 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.208 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83220 00:14:21.208 killing process with pid 83220 00:14:21.208 Received shutdown signal, test time was about 10.000000 seconds 00:14:21.208 00:14:21.208 Latency(us) 00:14:21.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.208 =================================================================================================================== 00:14:21.208 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:21.208 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:21.208 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:21.208 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83220' 00:14:21.208 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83220 00:14:21.208 [2024-07-24 18:01:28.155102] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:21.208 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83220 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
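Note on the trace above: the "request:" block is simply the JSON-RPC payload that rpc.py sent to bdevperf's private socket (-r /var/tmp/bdevperf.sock). A minimal sketch of issuing the same bdev_nvme_attach_controller call directly, assuming the simple framing used by SPDK's scripts/rpc.py client (write one JSON object, then read until the buffered reply parses); the helper name rpc_call is illustrative, not part of the test scripts:

import json
import socket

def rpc_call(sock_path, method, params):
    # Send one request object, then read until the accumulated reply parses as JSON.
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                continue  # reply not complete yet, keep reading

# Parameters mirror the failing call traced above; with the mismatched PSK the reply
# should carry the same Code=-5 (Input/output error) seen in the log.
print(rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
    "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode2",
    "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "/tmp/tmp.Ib7gIzbgTM",
}))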
00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:21.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83271 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83271 /var/tmp/bdevperf.sock 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83271 ']' 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.466 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.466 [2024-07-24 18:01:28.404909] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:14:21.466 [2024-07-24 18:01:28.405278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83271 ] 00:14:21.724 [2024-07-24 18:01:28.543882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.724 [2024-07-24 18:01:28.657832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.672 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.672 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:22.672 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:22.672 [2024-07-24 18:01:29.611316] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:22.672 [2024-07-24 18:01:29.613336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbef240 (9): Bad file descriptor 00:14:22.672 [2024-07-24 18:01:29.614474] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:22.672 [2024-07-24 18:01:29.614537] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:22.672 [2024-07-24 18:01:29.614569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:22.672 2024/07/24 18:01:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:22.672 request: 00:14:22.672 { 00:14:22.672 "method": "bdev_nvme_attach_controller", 00:14:22.672 "params": { 00:14:22.672 "name": "TLSTEST", 00:14:22.672 "trtype": "tcp", 00:14:22.672 "traddr": "10.0.0.2", 00:14:22.672 "adrfam": "ipv4", 00:14:22.672 "trsvcid": "4420", 00:14:22.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:22.672 "prchk_reftag": false, 00:14:22.672 "prchk_guard": false, 00:14:22.672 "hdgst": false, 00:14:22.672 "ddgst": false 00:14:22.672 } 00:14:22.672 } 00:14:22.672 Got JSON-RPC error response 00:14:22.672 GoRPCClient: error on JSON-RPC call 00:14:22.672 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83271 00:14:22.672 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83271 ']' 00:14:22.672 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83271 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83271 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:22.962 killing process with pid 83271 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83271' 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83271 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83271 00:14:22.962 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.962 00:14:22.962 Latency(us) 00:14:22.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.962 =================================================================================================================== 00:14:22.962 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 82616 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 82616 ']' 00:14:22.962 18:01:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 82616 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82616 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82616' 00:14:22.962 killing process with pid 82616 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 82616 00:14:22.962 [2024-07-24 18:01:29.885849] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:22.962 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 82616 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.q58C0ybhuD 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.q58C0ybhuD 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83321 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83321 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83321 ']' 00:14:23.221 18:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.221 18:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.480 [2024-07-24 18:01:30.236150] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:14:23.480 [2024-07-24 18:01:30.236282] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.480 [2024-07-24 18:01:30.383427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.738 [2024-07-24 18:01:30.505598] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.738 [2024-07-24 18:01:30.505671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.738 [2024-07-24 18:01:30.505687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.738 [2024-07-24 18:01:30.505700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.738 [2024-07-24 18:01:30.505711] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
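Note on the key material set up a few lines above: the key_long value produced by format_interchange_psk ("NVMeTLSkey-1:02:...wWXNJw==:") appears to be the NVMe TLS PSK interchange format, i.e. a fixed prefix, a hash label, and base64 of the configured key followed by a 4-byte CRC-32. A sketch of that transformation, with the assumptions called out in the comments (in particular the CRC byte order):

import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    # Assumptions: the key is taken as the literal ASCII string shown in the trace,
    # the trailing four bytes are its CRC-32 in little-endian order, and the digest
    # argument (2 above) becomes the zero-padded ":02:" label.
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02d}:{}:".format(digest, base64.b64encode(data + crc).decode())

# Under those assumptions this should reproduce the key_long value captured above.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))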
00:14:23.738 [2024-07-24 18:01:30.505750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.305 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:24.305 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:24.305 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.305 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:24.305 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.305 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.305 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.q58C0ybhuD 00:14:24.305 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.q58C0ybhuD 00:14:24.305 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:24.564 [2024-07-24 18:01:31.509677] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.564 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:24.823 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:25.081 [2024-07-24 18:01:32.009774] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:25.081 [2024-07-24 18:01:32.010338] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.081 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:25.338 malloc0 00:14:25.338 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:25.597 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q58C0ybhuD 00:14:26.162 [2024-07-24 18:01:32.831663] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q58C0ybhuD 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.q58C0ybhuD' 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:26.162 18:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83430 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83430 /var/tmp/bdevperf.sock 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83430 ']' 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.162 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.162 [2024-07-24 18:01:32.895013] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:14:26.162 [2024-07-24 18:01:32.895557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83430 ] 00:14:26.162 [2024-07-24 18:01:33.028349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.419 [2024-07-24 18:01:33.165985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.073 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:27.073 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:27.073 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q58C0ybhuD 00:14:27.337 [2024-07-24 18:01:34.047699] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:27.337 [2024-07-24 18:01:34.047812] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:27.337 TLSTESTn1 00:14:27.337 18:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:27.337 Running I/O for 10 seconds... 
00:14:37.344 00:14:37.344 Latency(us) 00:14:37.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.344 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:37.344 Verification LBA range: start 0x0 length 0x2000 00:14:37.344 TLSTESTn1 : 10.03 3998.64 15.62 0.00 0.00 31934.26 8051.57 40445.07 00:14:37.344 =================================================================================================================== 00:14:37.344 Total : 3998.64 15.62 0.00 0.00 31934.26 8051.57 40445.07 00:14:37.344 0 00:14:37.344 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.344 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 83430 00:14:37.344 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83430 ']' 00:14:37.344 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83430 00:14:37.344 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:37.344 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.344 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83430 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:37.603 killing process with pid 83430 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83430' 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83430 00:14:37.603 Received shutdown signal, test time was about 10.000000 seconds 00:14:37.603 00:14:37.603 Latency(us) 00:14:37.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.603 =================================================================================================================== 00:14:37.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83430 00:14:37.603 [2024-07-24 18:01:44.330401] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.q58C0ybhuD 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q58C0ybhuD 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q58C0ybhuD 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:37.603 18:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q58C0ybhuD 00:14:37.603 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.q58C0ybhuD' 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83579 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83579 /var/tmp/bdevperf.sock 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83579 ']' 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.604 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.862 [2024-07-24 18:01:44.588505] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:14:37.862 [2024-07-24 18:01:44.588622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83579 ] 00:14:37.862 [2024-07-24 18:01:44.727398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.862 [2024-07-24 18:01:44.837612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.119 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.119 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:38.119 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q58C0ybhuD 00:14:38.378 [2024-07-24 18:01:45.174685] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.378 [2024-07-24 18:01:45.174769] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:38.378 [2024-07-24 18:01:45.174780] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.q58C0ybhuD 00:14:38.378 2024/07/24 18:01:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.q58C0ybhuD subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:14:38.378 request: 00:14:38.378 { 00:14:38.378 "method": "bdev_nvme_attach_controller", 00:14:38.378 "params": { 00:14:38.378 "name": "TLSTEST", 00:14:38.378 "trtype": "tcp", 00:14:38.378 "traddr": "10.0.0.2", 00:14:38.378 "adrfam": "ipv4", 00:14:38.378 "trsvcid": "4420", 00:14:38.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.378 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.378 "prchk_reftag": false, 00:14:38.378 "prchk_guard": false, 00:14:38.378 "hdgst": false, 00:14:38.378 "ddgst": false, 00:14:38.378 "psk": "/tmp/tmp.q58C0ybhuD" 00:14:38.378 } 00:14:38.378 } 00:14:38.378 Got JSON-RPC error response 00:14:38.378 GoRPCClient: error on JSON-RPC call 00:14:38.378 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83579 00:14:38.378 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83579 ']' 00:14:38.378 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83579 00:14:38.378 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:38.378 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.378 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83579 00:14:38.378 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:38.378 killing process with pid 83579 00:14:38.378 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:38.378 
18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83579' 00:14:38.378 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83579 00:14:38.378 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83579 00:14:38.378 Received shutdown signal, test time was about 10.000000 seconds 00:14:38.378 00:14:38.378 Latency(us) 00:14:38.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.378 =================================================================================================================== 00:14:38.378 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 83321 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83321 ']' 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83321 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83321 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:38.638 killing process with pid 83321 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83321' 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83321 00:14:38.638 [2024-07-24 18:01:45.459518] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:38.638 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83321 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83616 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83616 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83616 ']' 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:38.904 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.904 [2024-07-24 18:01:45.754646] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:14:38.904 [2024-07-24 18:01:45.754819] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.162 [2024-07-24 18:01:45.898563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.162 [2024-07-24 18:01:46.014643] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.162 [2024-07-24 18:01:46.014701] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.162 [2024-07-24 18:01:46.014713] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.162 [2024-07-24 18:01:46.014722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.162 [2024-07-24 18:01:46.014730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
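Note: the recurring "Waiting for process to start up and listen on UNIX domain socket ..." messages come from waitforlisten, which blocks until the freshly started nvmf_tgt or bdevperf process is reachable over its RPC socket. A rough sketch of what that wait amounts to; the function name, poll interval and timeout below are illustrative, not the values the harness actually uses:

import socket
import time

def wait_for_rpc(sock_path: str = "/var/tmp/spdk.sock", timeout_s: float = 100.0) -> None:
    # Poll until something accepts connections on the RPC Unix socket.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
                sock.connect(sock_path)
                return
        except OSError:
            time.sleep(0.5)  # not listening yet; retry
    raise TimeoutError(f"no RPC listener on {sock_path} after {timeout_s}s")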
00:14:39.162 [2024-07-24 18:01:46.014763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.832 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.832 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:39.832 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.832 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.832 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.091 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.091 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.q58C0ybhuD 00:14:40.091 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:40.091 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.q58C0ybhuD 00:14:40.091 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:14:40.091 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.091 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:14:40.091 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.091 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.q58C0ybhuD 00:14:40.091 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.q58C0ybhuD 00:14:40.091 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:40.352 [2024-07-24 18:01:47.073011] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.352 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:40.613 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:40.873 [2024-07-24 18:01:47.597135] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:40.873 [2024-07-24 18:01:47.597686] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.873 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:41.132 malloc0 00:14:41.132 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:41.391 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q58C0ybhuD 00:14:41.652 [2024-07-24 18:01:48.432901] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect 
permissions for PSK file 00:14:41.652 [2024-07-24 18:01:48.432951] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:41.652 [2024-07-24 18:01:48.432989] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:41.652 2024/07/24 18:01:48 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.q58C0ybhuD], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:14:41.652 request: 00:14:41.652 { 00:14:41.652 "method": "nvmf_subsystem_add_host", 00:14:41.652 "params": { 00:14:41.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.653 "host": "nqn.2016-06.io.spdk:host1", 00:14:41.653 "psk": "/tmp/tmp.q58C0ybhuD" 00:14:41.653 } 00:14:41.653 } 00:14:41.653 Got JSON-RPC error response 00:14:41.653 GoRPCClient: error on JSON-RPC call 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 83616 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83616 ']' 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83616 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83616 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:41.653 killing process with pid 83616 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83616' 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83616 00:14:41.653 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83616 00:14:41.911 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.q58C0ybhuD 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83732 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83732 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83732 ']' 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.912 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.912 [2024-07-24 18:01:48.765137] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:14:41.912 [2024-07-24 18:01:48.765237] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.172 [2024-07-24 18:01:48.902529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.172 [2024-07-24 18:01:49.032364] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.172 [2024-07-24 18:01:49.032451] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.172 [2024-07-24 18:01:49.032472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.172 [2024-07-24 18:01:49.032488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.172 [2024-07-24 18:01:49.032502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
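Note on the two failures above: both the bdev_nvme_attach_controller attempt after chmod 0666 and the nvmf_subsystem_add_host attempt against the same world-readable file trip the PSK-file permission check ("Incorrect permissions for PSK file"), which is why the script chmods the key back to 0600 before restarting the target. A sketch of writing the interchange key with safe permissions up front; the path is illustrative (not the mktemp name from this run) and the key string is the test value generated earlier in the log:

import os

def write_psk_file(path: str, interchange_key: str) -> None:
    # Create (or truncate) the file owner read/write only, then write the key.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, interchange_key.encode("ascii"))
    finally:
        os.close(fd)
    os.chmod(path, 0o600)  # in case the file already existed with looser permissions

write_psk_file(
    "/tmp/psk.key",
    "NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:",
)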
00:14:42.172 [2024-07-24 18:01:49.032558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.140 18:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.140 18:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:43.140 18:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.140 18:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:43.140 18:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.140 18:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.140 18:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.q58C0ybhuD 00:14:43.140 18:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.q58C0ybhuD 00:14:43.140 18:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:43.140 [2024-07-24 18:01:50.099798] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.399 18:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:43.399 18:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:43.656 [2024-07-24 18:01:50.595892] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:43.656 [2024-07-24 18:01:50.596131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.657 18:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:43.914 malloc0 00:14:43.914 18:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:44.479 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q58C0ybhuD 00:14:44.736 [2024-07-24 18:01:51.530162] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:44.736 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=83834 00:14:44.736 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:44.736 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:44.736 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 83834 /var/tmp/bdevperf.sock 00:14:44.736 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83834 ']' 00:14:44.736 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:14:44.736 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.736 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.736 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.736 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.736 [2024-07-24 18:01:51.598366] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:14:44.736 [2024-07-24 18:01:51.598469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83834 ] 00:14:44.995 [2024-07-24 18:01:51.737412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.995 [2024-07-24 18:01:51.847869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.995 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.995 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:44.995 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q58C0ybhuD 00:14:45.560 [2024-07-24 18:01:52.280163] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:45.560 [2024-07-24 18:01:52.280299] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:45.560 TLSTESTn1 00:14:45.560 18:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:46.124 18:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:46.124 "subsystems": [ 00:14:46.124 { 00:14:46.124 "subsystem": "keyring", 00:14:46.124 "config": [] 00:14:46.124 }, 00:14:46.124 { 00:14:46.124 "subsystem": "iobuf", 00:14:46.124 "config": [ 00:14:46.124 { 00:14:46.124 "method": "iobuf_set_options", 00:14:46.124 "params": { 00:14:46.124 "large_bufsize": 135168, 00:14:46.124 "large_pool_count": 1024, 00:14:46.124 "small_bufsize": 8192, 00:14:46.124 "small_pool_count": 8192 00:14:46.124 } 00:14:46.124 } 00:14:46.124 ] 00:14:46.124 }, 00:14:46.124 { 00:14:46.124 "subsystem": "sock", 00:14:46.124 "config": [ 00:14:46.124 { 00:14:46.124 "method": "sock_set_default_impl", 00:14:46.124 "params": { 00:14:46.124 "impl_name": "posix" 00:14:46.124 } 00:14:46.124 }, 00:14:46.124 { 00:14:46.124 "method": "sock_impl_set_options", 00:14:46.124 "params": { 00:14:46.124 "enable_ktls": false, 00:14:46.124 "enable_placement_id": 0, 00:14:46.124 "enable_quickack": false, 00:14:46.124 "enable_recv_pipe": true, 00:14:46.124 "enable_zerocopy_send_client": false, 00:14:46.124 "enable_zerocopy_send_server": true, 00:14:46.124 "impl_name": "ssl", 00:14:46.124 "recv_buf_size": 4096, 
00:14:46.124 "send_buf_size": 4096, 00:14:46.124 "tls_version": 0, 00:14:46.124 "zerocopy_threshold": 0 00:14:46.124 } 00:14:46.124 }, 00:14:46.124 { 00:14:46.124 "method": "sock_impl_set_options", 00:14:46.124 "params": { 00:14:46.124 "enable_ktls": false, 00:14:46.124 "enable_placement_id": 0, 00:14:46.124 "enable_quickack": false, 00:14:46.124 "enable_recv_pipe": true, 00:14:46.124 "enable_zerocopy_send_client": false, 00:14:46.124 "enable_zerocopy_send_server": true, 00:14:46.124 "impl_name": "posix", 00:14:46.124 "recv_buf_size": 2097152, 00:14:46.124 "send_buf_size": 2097152, 00:14:46.124 "tls_version": 0, 00:14:46.124 "zerocopy_threshold": 0 00:14:46.124 } 00:14:46.124 } 00:14:46.124 ] 00:14:46.124 }, 00:14:46.124 { 00:14:46.124 "subsystem": "vmd", 00:14:46.124 "config": [] 00:14:46.124 }, 00:14:46.124 { 00:14:46.124 "subsystem": "accel", 00:14:46.124 "config": [ 00:14:46.124 { 00:14:46.124 "method": "accel_set_options", 00:14:46.124 "params": { 00:14:46.124 "buf_count": 2048, 00:14:46.124 "large_cache_size": 16, 00:14:46.124 "sequence_count": 2048, 00:14:46.124 "small_cache_size": 128, 00:14:46.124 "task_count": 2048 00:14:46.124 } 00:14:46.124 } 00:14:46.124 ] 00:14:46.124 }, 00:14:46.124 { 00:14:46.124 "subsystem": "bdev", 00:14:46.124 "config": [ 00:14:46.124 { 00:14:46.124 "method": "bdev_set_options", 00:14:46.124 "params": { 00:14:46.124 "bdev_auto_examine": true, 00:14:46.125 "bdev_io_cache_size": 256, 00:14:46.125 "bdev_io_pool_size": 65535, 00:14:46.125 "iobuf_large_cache_size": 16, 00:14:46.125 "iobuf_small_cache_size": 128 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "bdev_raid_set_options", 00:14:46.125 "params": { 00:14:46.125 "process_max_bandwidth_mb_sec": 0, 00:14:46.125 "process_window_size_kb": 1024 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "bdev_iscsi_set_options", 00:14:46.125 "params": { 00:14:46.125 "timeout_sec": 30 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "bdev_nvme_set_options", 00:14:46.125 "params": { 00:14:46.125 "action_on_timeout": "none", 00:14:46.125 "allow_accel_sequence": false, 00:14:46.125 "arbitration_burst": 0, 00:14:46.125 "bdev_retry_count": 3, 00:14:46.125 "ctrlr_loss_timeout_sec": 0, 00:14:46.125 "delay_cmd_submit": true, 00:14:46.125 "dhchap_dhgroups": [ 00:14:46.125 "null", 00:14:46.125 "ffdhe2048", 00:14:46.125 "ffdhe3072", 00:14:46.125 "ffdhe4096", 00:14:46.125 "ffdhe6144", 00:14:46.125 "ffdhe8192" 00:14:46.125 ], 00:14:46.125 "dhchap_digests": [ 00:14:46.125 "sha256", 00:14:46.125 "sha384", 00:14:46.125 "sha512" 00:14:46.125 ], 00:14:46.125 "disable_auto_failback": false, 00:14:46.125 "fast_io_fail_timeout_sec": 0, 00:14:46.125 "generate_uuids": false, 00:14:46.125 "high_priority_weight": 0, 00:14:46.125 "io_path_stat": false, 00:14:46.125 "io_queue_requests": 0, 00:14:46.125 "keep_alive_timeout_ms": 10000, 00:14:46.125 "low_priority_weight": 0, 00:14:46.125 "medium_priority_weight": 0, 00:14:46.125 "nvme_adminq_poll_period_us": 10000, 00:14:46.125 "nvme_error_stat": false, 00:14:46.125 "nvme_ioq_poll_period_us": 0, 00:14:46.125 "rdma_cm_event_timeout_ms": 0, 00:14:46.125 "rdma_max_cq_size": 0, 00:14:46.125 "rdma_srq_size": 0, 00:14:46.125 "reconnect_delay_sec": 0, 00:14:46.125 "timeout_admin_us": 0, 00:14:46.125 "timeout_us": 0, 00:14:46.125 "transport_ack_timeout": 0, 00:14:46.125 "transport_retry_count": 4, 00:14:46.125 "transport_tos": 0 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "bdev_nvme_set_hotplug", 00:14:46.125 "params": { 
00:14:46.125 "enable": false, 00:14:46.125 "period_us": 100000 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "bdev_malloc_create", 00:14:46.125 "params": { 00:14:46.125 "block_size": 4096, 00:14:46.125 "dif_is_head_of_md": false, 00:14:46.125 "dif_pi_format": 0, 00:14:46.125 "dif_type": 0, 00:14:46.125 "md_size": 0, 00:14:46.125 "name": "malloc0", 00:14:46.125 "num_blocks": 8192, 00:14:46.125 "optimal_io_boundary": 0, 00:14:46.125 "physical_block_size": 4096, 00:14:46.125 "uuid": "f51358aa-a4cc-4548-b230-e5577ddb81db" 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "bdev_wait_for_examine" 00:14:46.125 } 00:14:46.125 ] 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "subsystem": "nbd", 00:14:46.125 "config": [] 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "subsystem": "scheduler", 00:14:46.125 "config": [ 00:14:46.125 { 00:14:46.125 "method": "framework_set_scheduler", 00:14:46.125 "params": { 00:14:46.125 "name": "static" 00:14:46.125 } 00:14:46.125 } 00:14:46.125 ] 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "subsystem": "nvmf", 00:14:46.125 "config": [ 00:14:46.125 { 00:14:46.125 "method": "nvmf_set_config", 00:14:46.125 "params": { 00:14:46.125 "admin_cmd_passthru": { 00:14:46.125 "identify_ctrlr": false 00:14:46.125 }, 00:14:46.125 "discovery_filter": "match_any" 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "nvmf_set_max_subsystems", 00:14:46.125 "params": { 00:14:46.125 "max_subsystems": 1024 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "nvmf_set_crdt", 00:14:46.125 "params": { 00:14:46.125 "crdt1": 0, 00:14:46.125 "crdt2": 0, 00:14:46.125 "crdt3": 0 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "nvmf_create_transport", 00:14:46.125 "params": { 00:14:46.125 "abort_timeout_sec": 1, 00:14:46.125 "ack_timeout": 0, 00:14:46.125 "buf_cache_size": 4294967295, 00:14:46.125 "c2h_success": false, 00:14:46.125 "data_wr_pool_size": 0, 00:14:46.125 "dif_insert_or_strip": false, 00:14:46.125 "in_capsule_data_size": 4096, 00:14:46.125 "io_unit_size": 131072, 00:14:46.125 "max_aq_depth": 128, 00:14:46.125 "max_io_qpairs_per_ctrlr": 127, 00:14:46.125 "max_io_size": 131072, 00:14:46.125 "max_queue_depth": 128, 00:14:46.125 "num_shared_buffers": 511, 00:14:46.125 "sock_priority": 0, 00:14:46.125 "trtype": "TCP", 00:14:46.125 "zcopy": false 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "nvmf_create_subsystem", 00:14:46.125 "params": { 00:14:46.125 "allow_any_host": false, 00:14:46.125 "ana_reporting": false, 00:14:46.125 "max_cntlid": 65519, 00:14:46.125 "max_namespaces": 10, 00:14:46.125 "min_cntlid": 1, 00:14:46.125 "model_number": "SPDK bdev Controller", 00:14:46.125 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.125 "serial_number": "SPDK00000000000001" 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "nvmf_subsystem_add_host", 00:14:46.125 "params": { 00:14:46.125 "host": "nqn.2016-06.io.spdk:host1", 00:14:46.125 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.125 "psk": "/tmp/tmp.q58C0ybhuD" 00:14:46.125 } 00:14:46.125 }, 00:14:46.125 { 00:14:46.125 "method": "nvmf_subsystem_add_ns", 00:14:46.125 "params": { 00:14:46.125 "namespace": { 00:14:46.125 "bdev_name": "malloc0", 00:14:46.125 "nguid": "F51358AAA4CC4548B230E5577DDB81DB", 00:14:46.125 "no_auto_visible": false, 00:14:46.125 "nsid": 1, 00:14:46.125 "uuid": "f51358aa-a4cc-4548-b230-e5577ddb81db" 00:14:46.125 }, 00:14:46.125 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:14:46.125 } 00:14:46.125 }, 
00:14:46.125 { 00:14:46.125 "method": "nvmf_subsystem_add_listener", 00:14:46.125 "params": { 00:14:46.125 "listen_address": { 00:14:46.125 "adrfam": "IPv4", 00:14:46.125 "traddr": "10.0.0.2", 00:14:46.125 "trsvcid": "4420", 00:14:46.125 "trtype": "TCP" 00:14:46.125 }, 00:14:46.125 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.125 "secure_channel": true 00:14:46.125 } 00:14:46.125 } 00:14:46.125 ] 00:14:46.125 } 00:14:46.125 ] 00:14:46.125 }' 00:14:46.125 18:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:46.382 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:46.382 "subsystems": [ 00:14:46.382 { 00:14:46.382 "subsystem": "keyring", 00:14:46.382 "config": [] 00:14:46.382 }, 00:14:46.382 { 00:14:46.382 "subsystem": "iobuf", 00:14:46.382 "config": [ 00:14:46.382 { 00:14:46.382 "method": "iobuf_set_options", 00:14:46.382 "params": { 00:14:46.382 "large_bufsize": 135168, 00:14:46.382 "large_pool_count": 1024, 00:14:46.382 "small_bufsize": 8192, 00:14:46.382 "small_pool_count": 8192 00:14:46.382 } 00:14:46.382 } 00:14:46.383 ] 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "subsystem": "sock", 00:14:46.383 "config": [ 00:14:46.383 { 00:14:46.383 "method": "sock_set_default_impl", 00:14:46.383 "params": { 00:14:46.383 "impl_name": "posix" 00:14:46.383 } 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "method": "sock_impl_set_options", 00:14:46.383 "params": { 00:14:46.383 "enable_ktls": false, 00:14:46.383 "enable_placement_id": 0, 00:14:46.383 "enable_quickack": false, 00:14:46.383 "enable_recv_pipe": true, 00:14:46.383 "enable_zerocopy_send_client": false, 00:14:46.383 "enable_zerocopy_send_server": true, 00:14:46.383 "impl_name": "ssl", 00:14:46.383 "recv_buf_size": 4096, 00:14:46.383 "send_buf_size": 4096, 00:14:46.383 "tls_version": 0, 00:14:46.383 "zerocopy_threshold": 0 00:14:46.383 } 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "method": "sock_impl_set_options", 00:14:46.383 "params": { 00:14:46.383 "enable_ktls": false, 00:14:46.383 "enable_placement_id": 0, 00:14:46.383 "enable_quickack": false, 00:14:46.383 "enable_recv_pipe": true, 00:14:46.383 "enable_zerocopy_send_client": false, 00:14:46.383 "enable_zerocopy_send_server": true, 00:14:46.383 "impl_name": "posix", 00:14:46.383 "recv_buf_size": 2097152, 00:14:46.383 "send_buf_size": 2097152, 00:14:46.383 "tls_version": 0, 00:14:46.383 "zerocopy_threshold": 0 00:14:46.383 } 00:14:46.383 } 00:14:46.383 ] 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "subsystem": "vmd", 00:14:46.383 "config": [] 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "subsystem": "accel", 00:14:46.383 "config": [ 00:14:46.383 { 00:14:46.383 "method": "accel_set_options", 00:14:46.383 "params": { 00:14:46.383 "buf_count": 2048, 00:14:46.383 "large_cache_size": 16, 00:14:46.383 "sequence_count": 2048, 00:14:46.383 "small_cache_size": 128, 00:14:46.383 "task_count": 2048 00:14:46.383 } 00:14:46.383 } 00:14:46.383 ] 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "subsystem": "bdev", 00:14:46.383 "config": [ 00:14:46.383 { 00:14:46.383 "method": "bdev_set_options", 00:14:46.383 "params": { 00:14:46.383 "bdev_auto_examine": true, 00:14:46.383 "bdev_io_cache_size": 256, 00:14:46.383 "bdev_io_pool_size": 65535, 00:14:46.383 "iobuf_large_cache_size": 16, 00:14:46.383 "iobuf_small_cache_size": 128 00:14:46.383 } 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "method": "bdev_raid_set_options", 00:14:46.383 "params": { 00:14:46.383 
"process_max_bandwidth_mb_sec": 0, 00:14:46.383 "process_window_size_kb": 1024 00:14:46.383 } 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "method": "bdev_iscsi_set_options", 00:14:46.383 "params": { 00:14:46.383 "timeout_sec": 30 00:14:46.383 } 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "method": "bdev_nvme_set_options", 00:14:46.383 "params": { 00:14:46.383 "action_on_timeout": "none", 00:14:46.383 "allow_accel_sequence": false, 00:14:46.383 "arbitration_burst": 0, 00:14:46.383 "bdev_retry_count": 3, 00:14:46.383 "ctrlr_loss_timeout_sec": 0, 00:14:46.383 "delay_cmd_submit": true, 00:14:46.383 "dhchap_dhgroups": [ 00:14:46.383 "null", 00:14:46.383 "ffdhe2048", 00:14:46.383 "ffdhe3072", 00:14:46.383 "ffdhe4096", 00:14:46.383 "ffdhe6144", 00:14:46.383 "ffdhe8192" 00:14:46.383 ], 00:14:46.383 "dhchap_digests": [ 00:14:46.383 "sha256", 00:14:46.383 "sha384", 00:14:46.383 "sha512" 00:14:46.383 ], 00:14:46.383 "disable_auto_failback": false, 00:14:46.383 "fast_io_fail_timeout_sec": 0, 00:14:46.383 "generate_uuids": false, 00:14:46.383 "high_priority_weight": 0, 00:14:46.383 "io_path_stat": false, 00:14:46.383 "io_queue_requests": 512, 00:14:46.383 "keep_alive_timeout_ms": 10000, 00:14:46.383 "low_priority_weight": 0, 00:14:46.383 "medium_priority_weight": 0, 00:14:46.383 "nvme_adminq_poll_period_us": 10000, 00:14:46.383 "nvme_error_stat": false, 00:14:46.383 "nvme_ioq_poll_period_us": 0, 00:14:46.383 "rdma_cm_event_timeout_ms": 0, 00:14:46.383 "rdma_max_cq_size": 0, 00:14:46.383 "rdma_srq_size": 0, 00:14:46.383 "reconnect_delay_sec": 0, 00:14:46.383 "timeout_admin_us": 0, 00:14:46.383 "timeout_us": 0, 00:14:46.383 "transport_ack_timeout": 0, 00:14:46.383 "transport_retry_count": 4, 00:14:46.383 "transport_tos": 0 00:14:46.383 } 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "method": "bdev_nvme_attach_controller", 00:14:46.383 "params": { 00:14:46.383 "adrfam": "IPv4", 00:14:46.383 "ctrlr_loss_timeout_sec": 0, 00:14:46.383 "ddgst": false, 00:14:46.383 "fast_io_fail_timeout_sec": 0, 00:14:46.383 "hdgst": false, 00:14:46.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:46.383 "name": "TLSTEST", 00:14:46.383 "prchk_guard": false, 00:14:46.383 "prchk_reftag": false, 00:14:46.383 "psk": "/tmp/tmp.q58C0ybhuD", 00:14:46.383 "reconnect_delay_sec": 0, 00:14:46.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.383 "traddr": "10.0.0.2", 00:14:46.383 "trsvcid": "4420", 00:14:46.383 "trtype": "TCP" 00:14:46.383 } 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "method": "bdev_nvme_set_hotplug", 00:14:46.383 "params": { 00:14:46.383 "enable": false, 00:14:46.383 "period_us": 100000 00:14:46.383 } 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "method": "bdev_wait_for_examine" 00:14:46.383 } 00:14:46.383 ] 00:14:46.383 }, 00:14:46.383 { 00:14:46.383 "subsystem": "nbd", 00:14:46.383 "config": [] 00:14:46.383 } 00:14:46.383 ] 00:14:46.383 }' 00:14:46.383 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 83834 00:14:46.383 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83834 ']' 00:14:46.383 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83834 00:14:46.383 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:46.383 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.383 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83834 00:14:46.383 
18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:46.383 killing process with pid 83834 00:14:46.383 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:46.383 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83834' 00:14:46.383 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83834 00:14:46.383 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83834 00:14:46.383 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.383 00:14:46.383 Latency(us) 00:14:46.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.383 =================================================================================================================== 00:14:46.383 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:46.383 [2024-07-24 18:01:53.260113] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:46.640 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 83732 00:14:46.640 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83732 ']' 00:14:46.640 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83732 00:14:46.640 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:46.640 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.640 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83732 00:14:46.640 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:46.640 killing process with pid 83732 00:14:46.640 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:46.640 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83732' 00:14:46.640 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83732 00:14:46.640 [2024-07-24 18:01:53.486489] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:46.640 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83732 00:14:46.898 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:46.898 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.898 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:46.898 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.898 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:46.898 "subsystems": [ 00:14:46.898 { 00:14:46.898 "subsystem": "keyring", 00:14:46.898 "config": [] 00:14:46.898 }, 00:14:46.898 { 00:14:46.898 "subsystem": "iobuf", 00:14:46.898 "config": [ 00:14:46.898 { 00:14:46.898 "method": "iobuf_set_options", 00:14:46.898 "params": { 00:14:46.898 "large_bufsize": 135168, 00:14:46.898 
"large_pool_count": 1024, 00:14:46.898 "small_bufsize": 8192, 00:14:46.898 "small_pool_count": 8192 00:14:46.898 } 00:14:46.898 } 00:14:46.898 ] 00:14:46.898 }, 00:14:46.898 { 00:14:46.898 "subsystem": "sock", 00:14:46.898 "config": [ 00:14:46.898 { 00:14:46.898 "method": "sock_set_default_impl", 00:14:46.898 "params": { 00:14:46.898 "impl_name": "posix" 00:14:46.898 } 00:14:46.898 }, 00:14:46.898 { 00:14:46.898 "method": "sock_impl_set_options", 00:14:46.898 "params": { 00:14:46.898 "enable_ktls": false, 00:14:46.898 "enable_placement_id": 0, 00:14:46.898 "enable_quickack": false, 00:14:46.898 "enable_recv_pipe": true, 00:14:46.898 "enable_zerocopy_send_client": false, 00:14:46.898 "enable_zerocopy_send_server": true, 00:14:46.898 "impl_name": "ssl", 00:14:46.898 "recv_buf_size": 4096, 00:14:46.898 "send_buf_size": 4096, 00:14:46.898 "tls_version": 0, 00:14:46.898 "zerocopy_threshold": 0 00:14:46.898 } 00:14:46.898 }, 00:14:46.898 { 00:14:46.898 "method": "sock_impl_set_options", 00:14:46.898 "params": { 00:14:46.898 "enable_ktls": false, 00:14:46.898 "enable_placement_id": 0, 00:14:46.898 "enable_quickack": false, 00:14:46.898 "enable_recv_pipe": true, 00:14:46.898 "enable_zerocopy_send_client": false, 00:14:46.898 "enable_zerocopy_send_server": true, 00:14:46.898 "impl_name": "posix", 00:14:46.898 "recv_buf_size": 2097152, 00:14:46.898 "send_buf_size": 2097152, 00:14:46.898 "tls_version": 0, 00:14:46.898 "zerocopy_threshold": 0 00:14:46.898 } 00:14:46.898 } 00:14:46.898 ] 00:14:46.898 }, 00:14:46.898 { 00:14:46.898 "subsystem": "vmd", 00:14:46.898 "config": [] 00:14:46.898 }, 00:14:46.898 { 00:14:46.898 "subsystem": "accel", 00:14:46.898 "config": [ 00:14:46.898 { 00:14:46.898 "method": "accel_set_options", 00:14:46.898 "params": { 00:14:46.898 "buf_count": 2048, 00:14:46.898 "large_cache_size": 16, 00:14:46.898 "sequence_count": 2048, 00:14:46.898 "small_cache_size": 128, 00:14:46.898 "task_count": 2048 00:14:46.898 } 00:14:46.898 } 00:14:46.898 ] 00:14:46.898 }, 00:14:46.898 { 00:14:46.898 "subsystem": "bdev", 00:14:46.898 "config": [ 00:14:46.898 { 00:14:46.898 "method": "bdev_set_options", 00:14:46.898 "params": { 00:14:46.898 "bdev_auto_examine": true, 00:14:46.898 "bdev_io_cache_size": 256, 00:14:46.898 "bdev_io_pool_size": 65535, 00:14:46.898 "iobuf_large_cache_size": 16, 00:14:46.898 "iobuf_small_cache_size": 128 00:14:46.898 } 00:14:46.898 }, 00:14:46.898 { 00:14:46.898 "method": "bdev_raid_set_options", 00:14:46.898 "params": { 00:14:46.898 "process_max_bandwidth_mb_sec": 0, 00:14:46.898 "process_window_size_kb": 1024 00:14:46.898 } 00:14:46.898 }, 00:14:46.898 { 00:14:46.898 "method": "bdev_iscsi_set_options", 00:14:46.898 "params": { 00:14:46.898 "timeout_sec": 30 00:14:46.898 } 00:14:46.898 }, 00:14:46.898 { 00:14:46.898 "method": "bdev_nvme_set_options", 00:14:46.898 "params": { 00:14:46.898 "action_on_timeout": "none", 00:14:46.898 "allow_accel_sequence": false, 00:14:46.898 "arbitration_burst": 0, 00:14:46.899 "bdev_retry_count": 3, 00:14:46.899 "ctrlr_loss_timeout_sec": 0, 00:14:46.899 "delay_cmd_submit": true, 00:14:46.899 "dhchap_dhgroups": [ 00:14:46.899 "null", 00:14:46.899 "ffdhe2048", 00:14:46.899 "ffdhe3072", 00:14:46.899 "ffdhe4096", 00:14:46.899 "ffdhe6144", 00:14:46.899 "ffdhe8192" 00:14:46.899 ], 00:14:46.899 "dhchap_digests": [ 00:14:46.899 "sha256", 00:14:46.899 "sha384", 00:14:46.899 "sha512" 00:14:46.899 ], 00:14:46.899 "disable_auto_failback": false, 00:14:46.899 "fast_io_fail_timeout_sec": 0, 00:14:46.899 "generate_uuids": false, 00:14:46.899 
"high_priority_weight": 0, 00:14:46.899 "io_path_stat": false, 00:14:46.899 "io_queue_requests": 0, 00:14:46.899 "keep_alive_timeout_ms": 10000, 00:14:46.899 "low_priority_weight": 0, 00:14:46.899 "medium_priority_weight": 0, 00:14:46.899 "nvme_adminq_poll_period_us": 10000, 00:14:46.899 "nvme_error_stat": false, 00:14:46.899 "nvme_ioq_poll_period_us": 0, 00:14:46.899 "rdma_cm_event_timeout_ms": 0, 00:14:46.899 "rdma_max_cq_size": 0, 00:14:46.899 "rdma_srq_size": 0, 00:14:46.899 "reconnect_delay_sec": 0, 00:14:46.899 "timeout_admin_us": 0, 00:14:46.899 "timeout_us": 0, 00:14:46.899 "transport_ack_timeout": 0, 00:14:46.899 "transport_retry_count": 4, 00:14:46.899 "transport_tos": 0 00:14:46.899 } 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "method": "bdev_nvme_set_hotplug", 00:14:46.899 "params": { 00:14:46.899 "enable": false, 00:14:46.899 "period_us": 100000 00:14:46.899 } 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "method": "bdev_malloc_create", 00:14:46.899 "params": { 00:14:46.899 "block_size": 4096, 00:14:46.899 "dif_is_head_of_md": false, 00:14:46.899 "dif_pi_format": 0, 00:14:46.899 "dif_type": 0, 00:14:46.899 "md_size": 0, 00:14:46.899 "name": "malloc0", 00:14:46.899 "num_blocks": 8192, 00:14:46.899 "optimal_io_boundary": 0, 00:14:46.899 "physical_block_size": 4096, 00:14:46.899 "uuid": "f51358aa-a4cc-4548-b230-e5577ddb81db" 00:14:46.899 } 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "method": "bdev_wait_for_examine" 00:14:46.899 } 00:14:46.899 ] 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "subsystem": "nbd", 00:14:46.899 "config": [] 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "subsystem": "scheduler", 00:14:46.899 "config": [ 00:14:46.899 { 00:14:46.899 "method": "framework_set_scheduler", 00:14:46.899 "params": { 00:14:46.899 "name": "static" 00:14:46.899 } 00:14:46.899 } 00:14:46.899 ] 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "subsystem": "nvmf", 00:14:46.899 "config": [ 00:14:46.899 { 00:14:46.899 "method": "nvmf_set_config", 00:14:46.899 "params": { 00:14:46.899 "admin_cmd_passthru": { 00:14:46.899 "identify_ctrlr": false 00:14:46.899 }, 00:14:46.899 "discovery_filter": "match_any" 00:14:46.899 } 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "method": "nvmf_set_max_subsystems", 00:14:46.899 "params": { 00:14:46.899 "max_subsystems": 1024 00:14:46.899 } 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "method": "nvmf_set_crdt", 00:14:46.899 "params": { 00:14:46.899 "crdt1": 0, 00:14:46.899 "crdt2": 0, 00:14:46.899 "crdt3": 0 00:14:46.899 } 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "method": "nvmf_create_transport", 00:14:46.899 "params": { 00:14:46.899 "abort_timeout_sec": 1, 00:14:46.899 "ack_timeout": 0, 00:14:46.899 "buf_cache_size": 4294967295, 00:14:46.899 "c2h_success": false, 00:14:46.899 "data_wr_pool_size": 0, 00:14:46.899 "dif_insert_or_strip": false, 00:14:46.899 "in_capsule_data_size": 4096, 00:14:46.899 "io_unit_size": 131072, 00:14:46.899 "max_aq_depth": 128, 00:14:46.899 "max_io_qpairs_per_ctrlr": 127, 00:14:46.899 "max_io_size": 131072, 00:14:46.899 "max_queue_depth": 128, 00:14:46.899 "num_shared_buffers": 511, 00:14:46.899 "sock_priority": 0, 00:14:46.899 "trtype": "TCP", 00:14:46.899 "zcopy": false 00:14:46.899 } 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "method": "nvmf_create_subsystem", 00:14:46.899 "params": { 00:14:46.899 "allow_any_host": false, 00:14:46.899 "ana_reporting": false, 00:14:46.899 "max_cntlid": 65519, 00:14:46.899 "max_namespaces": 10, 00:14:46.899 "min_cntlid": 1, 00:14:46.899 "model_number": "SPDK bdev Controller", 00:14:46.899 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:46.899 "serial_number": "SPDK00000000000001" 00:14:46.899 } 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "method": "nvmf_subsystem_add_host", 00:14:46.899 "params": { 00:14:46.899 "host": "nqn.2016-06.io.spdk:host1", 00:14:46.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.899 "psk": "/tmp/tmp.q58C0ybhuD" 00:14:46.899 } 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "method": "nvmf_subsystem_add_ns", 00:14:46.899 "params": { 00:14:46.899 "namespace": { 00:14:46.899 "bdev_name": "malloc0", 00:14:46.899 "nguid": "F51358AAA4CC4548B230E5577DDB81DB", 00:14:46.899 "no_auto_visible": false, 00:14:46.899 "nsid": 1, 00:14:46.899 "uuid": "f51358aa-a4cc-4548-b230-e5577ddb81db" 00:14:46.899 }, 00:14:46.899 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:14:46.899 } 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "method": "nvmf_subsystem_add_listener", 00:14:46.899 "params": { 00:14:46.899 "listen_address": { 00:14:46.899 "adrfam": "IPv4", 00:14:46.899 "traddr": "10.0.0.2", 00:14:46.899 "trsvcid": "4420", 00:14:46.899 "trtype": "TCP" 00:14:46.899 }, 00:14:46.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.899 "secure_channel": true 00:14:46.899 } 00:14:46.899 } 00:14:46.899 ] 00:14:46.899 } 00:14:46.899 ] 00:14:46.899 }' 00:14:46.899 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83895 00:14:46.899 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:46.899 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83895 00:14:46.899 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83895 ']' 00:14:46.899 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.899 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.899 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.899 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.899 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.899 [2024-07-24 18:01:53.761913] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:14:46.899 [2024-07-24 18:01:53.762042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.157 [2024-07-24 18:01:53.902656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.157 [2024-07-24 18:01:54.038155] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.157 [2024-07-24 18:01:54.038265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:47.157 [2024-07-24 18:01:54.038284] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.157 [2024-07-24 18:01:54.038299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.157 [2024-07-24 18:01:54.038322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.157 [2024-07-24 18:01:54.038456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.415 [2024-07-24 18:01:54.252997] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.415 [2024-07-24 18:01:54.268927] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:47.415 [2024-07-24 18:01:54.284929] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:47.415 [2024-07-24 18:01:54.285177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=83945 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 83945 /var/tmp/bdevperf.sock 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83945 ']' 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:47.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.982 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:47.982 "subsystems": [ 00:14:47.982 { 00:14:47.982 "subsystem": "keyring", 00:14:47.982 "config": [] 00:14:47.982 }, 00:14:47.982 { 00:14:47.982 "subsystem": "iobuf", 00:14:47.982 "config": [ 00:14:47.982 { 00:14:47.982 "method": "iobuf_set_options", 00:14:47.982 "params": { 00:14:47.982 "large_bufsize": 135168, 00:14:47.982 "large_pool_count": 1024, 00:14:47.982 "small_bufsize": 8192, 00:14:47.982 "small_pool_count": 8192 00:14:47.982 } 00:14:47.982 } 00:14:47.982 ] 00:14:47.982 }, 00:14:47.982 { 00:14:47.982 "subsystem": "sock", 00:14:47.982 "config": [ 00:14:47.982 { 00:14:47.982 "method": "sock_set_default_impl", 00:14:47.982 "params": { 00:14:47.982 "impl_name": "posix" 00:14:47.982 } 00:14:47.982 }, 00:14:47.982 { 00:14:47.982 "method": "sock_impl_set_options", 00:14:47.982 "params": { 00:14:47.983 "enable_ktls": false, 00:14:47.983 "enable_placement_id": 0, 00:14:47.983 "enable_quickack": false, 00:14:47.983 "enable_recv_pipe": true, 00:14:47.983 "enable_zerocopy_send_client": false, 00:14:47.983 "enable_zerocopy_send_server": true, 00:14:47.983 "impl_name": "ssl", 00:14:47.983 "recv_buf_size": 4096, 00:14:47.983 "send_buf_size": 4096, 00:14:47.983 "tls_version": 0, 00:14:47.983 "zerocopy_threshold": 0 00:14:47.983 } 00:14:47.983 }, 00:14:47.983 { 00:14:47.983 "method": "sock_impl_set_options", 00:14:47.983 "params": { 00:14:47.983 "enable_ktls": false, 00:14:47.983 "enable_placement_id": 0, 00:14:47.983 "enable_quickack": false, 00:14:47.983 "enable_recv_pipe": true, 00:14:47.983 "enable_zerocopy_send_client": false, 00:14:47.983 "enable_zerocopy_send_server": true, 00:14:47.983 "impl_name": "posix", 00:14:47.983 "recv_buf_size": 2097152, 00:14:47.983 "send_buf_size": 2097152, 00:14:47.983 "tls_version": 0, 00:14:47.983 "zerocopy_threshold": 0 00:14:47.983 } 00:14:47.983 } 00:14:47.983 ] 00:14:47.983 }, 00:14:47.983 { 00:14:47.983 "subsystem": "vmd", 00:14:47.983 "config": [] 00:14:47.983 }, 00:14:47.983 { 00:14:47.983 "subsystem": "accel", 00:14:47.983 "config": [ 00:14:47.983 { 00:14:47.983 "method": "accel_set_options", 00:14:47.983 "params": { 00:14:47.983 "buf_count": 2048, 00:14:47.983 "large_cache_size": 16, 00:14:47.983 "sequence_count": 2048, 00:14:47.983 "small_cache_size": 128, 00:14:47.983 "task_count": 2048 00:14:47.983 } 00:14:47.983 } 00:14:47.983 ] 00:14:47.983 }, 00:14:47.983 { 00:14:47.983 "subsystem": "bdev", 00:14:47.983 "config": [ 00:14:47.983 { 00:14:47.983 "method": "bdev_set_options", 00:14:47.983 "params": { 00:14:47.983 "bdev_auto_examine": true, 00:14:47.983 "bdev_io_cache_size": 256, 00:14:47.983 "bdev_io_pool_size": 65535, 00:14:47.983 "iobuf_large_cache_size": 16, 00:14:47.983 "iobuf_small_cache_size": 128 00:14:47.983 } 00:14:47.983 }, 00:14:47.983 { 00:14:47.983 "method": "bdev_raid_set_options", 00:14:47.983 "params": { 00:14:47.983 "process_max_bandwidth_mb_sec": 0, 00:14:47.983 "process_window_size_kb": 1024 00:14:47.983 } 00:14:47.983 }, 00:14:47.983 { 00:14:47.983 "method": "bdev_iscsi_set_options", 00:14:47.983 "params": { 00:14:47.983 "timeout_sec": 30 00:14:47.983 } 00:14:47.983 }, 00:14:47.983 { 00:14:47.983 "method": "bdev_nvme_set_options", 00:14:47.983 "params": { 00:14:47.983 "action_on_timeout": "none", 00:14:47.983 
"allow_accel_sequence": false, 00:14:47.983 "arbitration_burst": 0, 00:14:47.983 "bdev_retry_count": 3, 00:14:47.983 "ctrlr_loss_timeout_sec": 0, 00:14:47.983 "delay_cmd_submit": true, 00:14:47.983 "dhchap_dhgroups": [ 00:14:47.983 "null", 00:14:47.983 "ffdhe2048", 00:14:47.983 "ffdhe3072", 00:14:47.983 "ffdhe4096", 00:14:47.983 "ffdhe6144", 00:14:47.983 "ffdhe8192" 00:14:47.983 ], 00:14:47.983 "dhchap_digests": [ 00:14:47.983 "sha256", 00:14:47.983 "sha384", 00:14:47.983 "sha512" 00:14:47.983 ], 00:14:47.983 "disable_auto_failback": false, 00:14:47.983 "fast_io_fail_timeout_sec": 0, 00:14:47.983 "generate_uuids": false, 00:14:47.983 "high_priority_weight": 0, 00:14:47.983 "io_path_stat": false, 00:14:47.983 "io_queue_requests": 512, 00:14:47.983 "keep_alive_timeout_ms": 10000, 00:14:47.983 "low_priority_weight": 0, 00:14:47.983 "medium_priority_weight": 0, 00:14:47.983 "nvme_adminq_poll_period_us": 10000, 00:14:47.983 "nvme_error_stat": false, 00:14:47.983 "nvme_ioq_poll_period_us": 0, 00:14:47.983 "rdma_cm_event_timeout_ms": 0, 00:14:47.983 "rdma_max_cq_size": 0, 00:14:47.983 "rdma_srq_size": 0, 00:14:47.983 "reconnect_delay_sec": 0, 00:14:47.983 "timeout_admin_us": 0, 00:14:47.983 "timeout_us": 0, 00:14:47.983 "transport_ack_timeout": 0, 00:14:47.983 "transport_retry_count": 4, 00:14:47.983 "transport_tos": 0 00:14:47.983 } 00:14:47.983 }, 00:14:47.983 { 00:14:47.983 "method": "bdev_nvme_attach_controller", 00:14:47.983 "params": { 00:14:47.983 "adrfam": "IPv4", 00:14:47.983 "ctrlr_loss_timeout_sec": 0, 00:14:47.983 "ddgst": false, 00:14:47.983 "fast_io_fail_timeout_sec": 0, 00:14:47.983 "hdgst": false, 00:14:47.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:47.983 "name": "TLSTEST", 00:14:47.983 "prchk_guard": false, 00:14:47.983 "prchk_reftag": false, 00:14:47.983 "psk": "/tmp/tmp.q58C0ybhuD", 00:14:47.983 "reconnect_delay_sec": 0, 00:14:47.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.983 "traddr": "10.0.0.2", 00:14:47.983 "trsvcid": "4420", 00:14:47.983 "trtype": "TCP" 00:14:47.983 } 00:14:47.983 }, 00:14:47.983 { 00:14:47.983 "method": "bdev_nvme_set_hotplug", 00:14:47.983 "params": { 00:14:47.983 "enable": false, 00:14:47.983 "period_us": 100000 00:14:47.983 } 00:14:47.983 }, 00:14:47.983 { 00:14:47.983 "method": "bdev_wait_for_examine" 00:14:47.983 } 00:14:47.983 ] 00:14:47.983 }, 00:14:47.983 { 00:14:47.983 "subsystem": "nbd", 00:14:47.983 "config": [] 00:14:47.983 } 00:14:47.983 ] 00:14:47.983 }' 00:14:47.983 [2024-07-24 18:01:54.860288] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:14:47.983 [2024-07-24 18:01:54.860980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83945 ] 00:14:48.241 [2024-07-24 18:01:55.009372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.241 [2024-07-24 18:01:55.119820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.498 [2024-07-24 18:01:55.272932] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:48.498 [2024-07-24 18:01:55.273052] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:49.069 18:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.069 18:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:49.069 18:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:49.069 Running I/O for 10 seconds... 00:14:59.044 00:14:59.044 Latency(us) 00:14:59.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.044 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:59.044 Verification LBA range: start 0x0 length 0x2000 00:14:59.044 TLSTESTn1 : 10.02 3918.11 15.31 0.00 0.00 32601.82 6459.98 27837.20 00:14:59.044 =================================================================================================================== 00:14:59.044 Total : 3918.11 15.31 0.00 0.00 32601.82 6459.98 27837.20 00:14:59.044 0 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 83945 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83945 ']' 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83945 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83945 00:14:59.044 killing process with pid 83945 00:14:59.044 Received shutdown signal, test time was about 10.000000 seconds 00:14:59.044 00:14:59.044 Latency(us) 00:14:59.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.044 =================================================================================================================== 00:14:59.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83945' 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 83945 00:14:59.044 [2024-07-24 18:02:05.970500] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:59.044 18:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83945 00:14:59.304 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 83895 00:14:59.304 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83895 ']' 00:14:59.304 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83895 00:14:59.304 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:59.304 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.304 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83895 00:14:59.304 killing process with pid 83895 00:14:59.304 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:59.304 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:59.304 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83895' 00:14:59.304 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83895 00:14:59.304 [2024-07-24 18:02:06.206746] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:59.304 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83895 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84090 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84090 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84090 ']' 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.562 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.562 [2024-07-24 18:02:06.489352] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:14:59.562 [2024-07-24 18:02:06.489781] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.821 [2024-07-24 18:02:06.637522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.821 [2024-07-24 18:02:06.753697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.821 [2024-07-24 18:02:06.753759] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.821 [2024-07-24 18:02:06.753774] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.821 [2024-07-24 18:02:06.753787] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.821 [2024-07-24 18:02:06.753798] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.821 [2024-07-24 18:02:06.753834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.760 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.760 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:00.760 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:00.760 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:00.760 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.760 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.760 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.q58C0ybhuD 00:15:00.760 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.q58C0ybhuD 00:15:00.760 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:00.760 [2024-07-24 18:02:07.669748] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.760 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:01.018 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:01.277 [2024-07-24 18:02:08.133816] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:01.277 [2024-07-24 18:02:08.134047] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.277 18:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:15:01.537 malloc0 00:15:01.537 18:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:01.795 18:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q58C0ybhuD 00:15:02.365 [2024-07-24 18:02:09.039739] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:02.365 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=84193 00:15:02.365 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:02.365 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:02.365 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 84193 /var/tmp/bdevperf.sock 00:15:02.365 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84193 ']' 00:15:02.365 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.365 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.365 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:02.365 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.365 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.365 [2024-07-24 18:02:09.128568] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
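For reference, the target-side TLS setup that the trace above just walked through reduces to the following rpc.py sequence (condensed from the commands already shown in this log; the NQNs, the 10.0.0.2:4420 listener, and the /tmp/tmp.q58C0ybhuD PSK file are the values used by this particular run, not general defaults):

  # condensed from the target/tls.sh trace above (rpc.py = /home/vagrant/spdk_repo/spdk/scripts/rpc.py)
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener, flagged experimental above
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q58C0ybhuD   # PSK-path form, deprecated per the warning above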
00:15:02.365 [2024-07-24 18:02:09.128671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84193 ] 00:15:02.365 [2024-07-24 18:02:09.272464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.625 [2024-07-24 18:02:09.397728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.193 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.193 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:03.193 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q58C0ybhuD 00:15:03.759 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:03.759 [2024-07-24 18:02:10.693576] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:04.017 nvme0n1 00:15:04.017 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:04.017 Running I/O for 1 seconds... 00:15:04.950 00:15:04.950 Latency(us) 00:15:04.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.950 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:04.950 Verification LBA range: start 0x0 length 0x2000 00:15:04.950 nvme0n1 : 1.01 4310.66 16.84 0.00 0.00 29444.46 5336.50 25090.93 00:15:04.950 =================================================================================================================== 00:15:04.950 Total : 4310.66 16.84 0.00 0.00 29444.46 5336.50 25090.93 00:15:04.950 0 00:15:05.208 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 84193 00:15:05.208 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84193 ']' 00:15:05.208 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84193 00:15:05.208 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:05.208 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:05.208 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84193 00:15:05.208 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:05.208 killing process with pid 84193 00:15:05.208 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:05.208 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84193' 00:15:05.208 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84193 00:15:05.208 Received shutdown signal, test time was about 1.000000 seconds 00:15:05.208 00:15:05.208 Latency(us) 00:15:05.208 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:15:05.208 =================================================================================================================== 00:15:05.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.208 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84193 00:15:05.208 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 84090 00:15:05.208 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84090 ']' 00:15:05.208 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84090 00:15:05.208 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:05.208 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84090 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:05.465 killing process with pid 84090 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84090' 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84090 00:15:05.465 [2024-07-24 18:02:12.208722] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84090 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84270 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84270 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84270 ']' 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:05.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:05.465 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.722 [2024-07-24 18:02:12.481624] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:15:05.722 [2024-07-24 18:02:12.481714] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.722 [2024-07-24 18:02:12.618319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.980 [2024-07-24 18:02:12.725590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.980 [2024-07-24 18:02:12.725651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.980 [2024-07-24 18:02:12.725663] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.980 [2024-07-24 18:02:12.725673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.980 [2024-07-24 18:02:12.725681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.980 [2024-07-24 18:02:12.725718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.544 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.544 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:06.544 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:06.544 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:06.544 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.802 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.802 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:15:06.802 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.802 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.802 [2024-07-24 18:02:13.560774] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.802 malloc0 00:15:06.803 [2024-07-24 18:02:13.590456] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:06.803 [2024-07-24 18:02:13.590706] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.803 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.803 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=84320 00:15:06.803 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:06.803 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 84320 /var/tmp/bdevperf.sock 00:15:06.803 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84320 ']' 
00:15:06.803 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:06.803 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:06.803 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:06.803 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.803 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.803 [2024-07-24 18:02:13.695745] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:15:06.803 [2024-07-24 18:02:13.695857] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84320 ] 00:15:07.060 [2024-07-24 18:02:13.837895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.060 [2024-07-24 18:02:13.944882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.319 18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:07.319 18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:07.319 18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q58C0ybhuD 00:15:07.319 18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:07.577 [2024-07-24 18:02:14.502337] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:07.835 nvme0n1 00:15:07.835 18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:07.835 Running I/O for 1 seconds... 
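The bdevperf pass whose results follow below is driven by three commands against the bdevperf application socket, condensed here from the trace above; note that this run loads the PSK into the keyring as key0 and references it via --psk key0, instead of passing the raw PSK path as the earlier bdevperf config did:

  # condensed from the target/tls.sh trace above (rpc.py = /home/vagrant/spdk_repo/spdk/scripts/rpc.py)
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q58C0ybhuD
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests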
00:15:09.251 00:15:09.251 Latency(us) 00:15:09.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.251 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:09.251 Verification LBA range: start 0x0 length 0x2000 00:15:09.251 nvme0n1 : 1.03 3603.78 14.08 0.00 0.00 35033.57 7240.17 26713.72 00:15:09.251 =================================================================================================================== 00:15:09.251 Total : 3603.78 14.08 0.00 0.00 35033.57 7240.17 26713.72 00:15:09.251 0 00:15:09.251 18:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:15:09.251 18:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.251 18:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.251 18:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.251 18:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:15:09.251 "subsystems": [ 00:15:09.251 { 00:15:09.251 "subsystem": "keyring", 00:15:09.251 "config": [ 00:15:09.251 { 00:15:09.251 "method": "keyring_file_add_key", 00:15:09.251 "params": { 00:15:09.251 "name": "key0", 00:15:09.251 "path": "/tmp/tmp.q58C0ybhuD" 00:15:09.251 } 00:15:09.251 } 00:15:09.251 ] 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "subsystem": "iobuf", 00:15:09.251 "config": [ 00:15:09.251 { 00:15:09.251 "method": "iobuf_set_options", 00:15:09.251 "params": { 00:15:09.251 "large_bufsize": 135168, 00:15:09.251 "large_pool_count": 1024, 00:15:09.251 "small_bufsize": 8192, 00:15:09.251 "small_pool_count": 8192 00:15:09.251 } 00:15:09.251 } 00:15:09.251 ] 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "subsystem": "sock", 00:15:09.251 "config": [ 00:15:09.251 { 00:15:09.251 "method": "sock_set_default_impl", 00:15:09.251 "params": { 00:15:09.251 "impl_name": "posix" 00:15:09.251 } 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "method": "sock_impl_set_options", 00:15:09.251 "params": { 00:15:09.251 "enable_ktls": false, 00:15:09.251 "enable_placement_id": 0, 00:15:09.251 "enable_quickack": false, 00:15:09.251 "enable_recv_pipe": true, 00:15:09.251 "enable_zerocopy_send_client": false, 00:15:09.251 "enable_zerocopy_send_server": true, 00:15:09.251 "impl_name": "ssl", 00:15:09.251 "recv_buf_size": 4096, 00:15:09.251 "send_buf_size": 4096, 00:15:09.251 "tls_version": 0, 00:15:09.251 "zerocopy_threshold": 0 00:15:09.251 } 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "method": "sock_impl_set_options", 00:15:09.251 "params": { 00:15:09.251 "enable_ktls": false, 00:15:09.251 "enable_placement_id": 0, 00:15:09.251 "enable_quickack": false, 00:15:09.251 "enable_recv_pipe": true, 00:15:09.251 "enable_zerocopy_send_client": false, 00:15:09.251 "enable_zerocopy_send_server": true, 00:15:09.251 "impl_name": "posix", 00:15:09.251 "recv_buf_size": 2097152, 00:15:09.251 "send_buf_size": 2097152, 00:15:09.251 "tls_version": 0, 00:15:09.251 "zerocopy_threshold": 0 00:15:09.251 } 00:15:09.251 } 00:15:09.251 ] 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "subsystem": "vmd", 00:15:09.251 "config": [] 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "subsystem": "accel", 00:15:09.251 "config": [ 00:15:09.251 { 00:15:09.251 "method": "accel_set_options", 00:15:09.251 "params": { 00:15:09.251 "buf_count": 2048, 00:15:09.251 "large_cache_size": 16, 00:15:09.251 "sequence_count": 2048, 00:15:09.251 "small_cache_size": 128, 00:15:09.251 "task_count": 
2048 00:15:09.251 } 00:15:09.251 } 00:15:09.251 ] 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "subsystem": "bdev", 00:15:09.251 "config": [ 00:15:09.251 { 00:15:09.251 "method": "bdev_set_options", 00:15:09.251 "params": { 00:15:09.251 "bdev_auto_examine": true, 00:15:09.251 "bdev_io_cache_size": 256, 00:15:09.251 "bdev_io_pool_size": 65535, 00:15:09.251 "iobuf_large_cache_size": 16, 00:15:09.251 "iobuf_small_cache_size": 128 00:15:09.251 } 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "method": "bdev_raid_set_options", 00:15:09.251 "params": { 00:15:09.251 "process_max_bandwidth_mb_sec": 0, 00:15:09.251 "process_window_size_kb": 1024 00:15:09.251 } 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "method": "bdev_iscsi_set_options", 00:15:09.251 "params": { 00:15:09.251 "timeout_sec": 30 00:15:09.251 } 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "method": "bdev_nvme_set_options", 00:15:09.251 "params": { 00:15:09.251 "action_on_timeout": "none", 00:15:09.251 "allow_accel_sequence": false, 00:15:09.251 "arbitration_burst": 0, 00:15:09.251 "bdev_retry_count": 3, 00:15:09.251 "ctrlr_loss_timeout_sec": 0, 00:15:09.251 "delay_cmd_submit": true, 00:15:09.251 "dhchap_dhgroups": [ 00:15:09.251 "null", 00:15:09.251 "ffdhe2048", 00:15:09.251 "ffdhe3072", 00:15:09.251 "ffdhe4096", 00:15:09.251 "ffdhe6144", 00:15:09.251 "ffdhe8192" 00:15:09.251 ], 00:15:09.251 "dhchap_digests": [ 00:15:09.251 "sha256", 00:15:09.251 "sha384", 00:15:09.251 "sha512" 00:15:09.251 ], 00:15:09.251 "disable_auto_failback": false, 00:15:09.251 "fast_io_fail_timeout_sec": 0, 00:15:09.251 "generate_uuids": false, 00:15:09.251 "high_priority_weight": 0, 00:15:09.251 "io_path_stat": false, 00:15:09.251 "io_queue_requests": 0, 00:15:09.251 "keep_alive_timeout_ms": 10000, 00:15:09.251 "low_priority_weight": 0, 00:15:09.251 "medium_priority_weight": 0, 00:15:09.251 "nvme_adminq_poll_period_us": 10000, 00:15:09.251 "nvme_error_stat": false, 00:15:09.251 "nvme_ioq_poll_period_us": 0, 00:15:09.251 "rdma_cm_event_timeout_ms": 0, 00:15:09.251 "rdma_max_cq_size": 0, 00:15:09.251 "rdma_srq_size": 0, 00:15:09.251 "reconnect_delay_sec": 0, 00:15:09.251 "timeout_admin_us": 0, 00:15:09.251 "timeout_us": 0, 00:15:09.251 "transport_ack_timeout": 0, 00:15:09.251 "transport_retry_count": 4, 00:15:09.251 "transport_tos": 0 00:15:09.251 } 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "method": "bdev_nvme_set_hotplug", 00:15:09.251 "params": { 00:15:09.251 "enable": false, 00:15:09.251 "period_us": 100000 00:15:09.251 } 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "method": "bdev_malloc_create", 00:15:09.251 "params": { 00:15:09.251 "block_size": 4096, 00:15:09.251 "dif_is_head_of_md": false, 00:15:09.251 "dif_pi_format": 0, 00:15:09.251 "dif_type": 0, 00:15:09.251 "md_size": 0, 00:15:09.251 "name": "malloc0", 00:15:09.251 "num_blocks": 8192, 00:15:09.251 "optimal_io_boundary": 0, 00:15:09.251 "physical_block_size": 4096, 00:15:09.251 "uuid": "f0fd1763-75b2-4083-a144-7edb1454aec3" 00:15:09.251 } 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "method": "bdev_wait_for_examine" 00:15:09.251 } 00:15:09.251 ] 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "subsystem": "nbd", 00:15:09.251 "config": [] 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "subsystem": "scheduler", 00:15:09.251 "config": [ 00:15:09.251 { 00:15:09.251 "method": "framework_set_scheduler", 00:15:09.251 "params": { 00:15:09.251 "name": "static" 00:15:09.251 } 00:15:09.251 } 00:15:09.251 ] 00:15:09.251 }, 00:15:09.251 { 00:15:09.251 "subsystem": "nvmf", 00:15:09.251 "config": [ 00:15:09.251 { 00:15:09.251 
"method": "nvmf_set_config", 00:15:09.251 "params": { 00:15:09.251 "admin_cmd_passthru": { 00:15:09.252 "identify_ctrlr": false 00:15:09.252 }, 00:15:09.252 "discovery_filter": "match_any" 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "nvmf_set_max_subsystems", 00:15:09.252 "params": { 00:15:09.252 "max_subsystems": 1024 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "nvmf_set_crdt", 00:15:09.252 "params": { 00:15:09.252 "crdt1": 0, 00:15:09.252 "crdt2": 0, 00:15:09.252 "crdt3": 0 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "nvmf_create_transport", 00:15:09.252 "params": { 00:15:09.252 "abort_timeout_sec": 1, 00:15:09.252 "ack_timeout": 0, 00:15:09.252 "buf_cache_size": 4294967295, 00:15:09.252 "c2h_success": false, 00:15:09.252 "data_wr_pool_size": 0, 00:15:09.252 "dif_insert_or_strip": false, 00:15:09.252 "in_capsule_data_size": 4096, 00:15:09.252 "io_unit_size": 131072, 00:15:09.252 "max_aq_depth": 128, 00:15:09.252 "max_io_qpairs_per_ctrlr": 127, 00:15:09.252 "max_io_size": 131072, 00:15:09.252 "max_queue_depth": 128, 00:15:09.252 "num_shared_buffers": 511, 00:15:09.252 "sock_priority": 0, 00:15:09.252 "trtype": "TCP", 00:15:09.252 "zcopy": false 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "nvmf_create_subsystem", 00:15:09.252 "params": { 00:15:09.252 "allow_any_host": false, 00:15:09.252 "ana_reporting": false, 00:15:09.252 "max_cntlid": 65519, 00:15:09.252 "max_namespaces": 32, 00:15:09.252 "min_cntlid": 1, 00:15:09.252 "model_number": "SPDK bdev Controller", 00:15:09.252 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.252 "serial_number": "00000000000000000000" 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "nvmf_subsystem_add_host", 00:15:09.252 "params": { 00:15:09.252 "host": "nqn.2016-06.io.spdk:host1", 00:15:09.252 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.252 "psk": "key0" 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "nvmf_subsystem_add_ns", 00:15:09.252 "params": { 00:15:09.252 "namespace": { 00:15:09.252 "bdev_name": "malloc0", 00:15:09.252 "nguid": "F0FD176375B24083A1447EDB1454AEC3", 00:15:09.252 "no_auto_visible": false, 00:15:09.252 "nsid": 1, 00:15:09.252 "uuid": "f0fd1763-75b2-4083-a144-7edb1454aec3" 00:15:09.252 }, 00:15:09.252 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "nvmf_subsystem_add_listener", 00:15:09.252 "params": { 00:15:09.252 "listen_address": { 00:15:09.252 "adrfam": "IPv4", 00:15:09.252 "traddr": "10.0.0.2", 00:15:09.252 "trsvcid": "4420", 00:15:09.252 "trtype": "TCP" 00:15:09.252 }, 00:15:09.252 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.252 "secure_channel": false, 00:15:09.252 "sock_impl": "ssl" 00:15:09.252 } 00:15:09.252 } 00:15:09.252 ] 00:15:09.252 } 00:15:09.252 ] 00:15:09.252 }' 00:15:09.252 18:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:09.510 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:15:09.510 "subsystems": [ 00:15:09.510 { 00:15:09.510 "subsystem": "keyring", 00:15:09.510 "config": [ 00:15:09.510 { 00:15:09.510 "method": "keyring_file_add_key", 00:15:09.510 "params": { 00:15:09.510 "name": "key0", 00:15:09.510 "path": "/tmp/tmp.q58C0ybhuD" 00:15:09.510 } 00:15:09.510 } 00:15:09.510 ] 00:15:09.510 }, 00:15:09.510 { 00:15:09.510 "subsystem": "iobuf", 00:15:09.510 "config": [ 00:15:09.510 
{ 00:15:09.510 "method": "iobuf_set_options", 00:15:09.510 "params": { 00:15:09.510 "large_bufsize": 135168, 00:15:09.510 "large_pool_count": 1024, 00:15:09.510 "small_bufsize": 8192, 00:15:09.510 "small_pool_count": 8192 00:15:09.510 } 00:15:09.510 } 00:15:09.510 ] 00:15:09.510 }, 00:15:09.510 { 00:15:09.510 "subsystem": "sock", 00:15:09.510 "config": [ 00:15:09.510 { 00:15:09.510 "method": "sock_set_default_impl", 00:15:09.510 "params": { 00:15:09.510 "impl_name": "posix" 00:15:09.510 } 00:15:09.510 }, 00:15:09.510 { 00:15:09.510 "method": "sock_impl_set_options", 00:15:09.510 "params": { 00:15:09.510 "enable_ktls": false, 00:15:09.510 "enable_placement_id": 0, 00:15:09.510 "enable_quickack": false, 00:15:09.510 "enable_recv_pipe": true, 00:15:09.510 "enable_zerocopy_send_client": false, 00:15:09.510 "enable_zerocopy_send_server": true, 00:15:09.510 "impl_name": "ssl", 00:15:09.510 "recv_buf_size": 4096, 00:15:09.510 "send_buf_size": 4096, 00:15:09.510 "tls_version": 0, 00:15:09.510 "zerocopy_threshold": 0 00:15:09.510 } 00:15:09.510 }, 00:15:09.510 { 00:15:09.510 "method": "sock_impl_set_options", 00:15:09.510 "params": { 00:15:09.510 "enable_ktls": false, 00:15:09.510 "enable_placement_id": 0, 00:15:09.510 "enable_quickack": false, 00:15:09.510 "enable_recv_pipe": true, 00:15:09.510 "enable_zerocopy_send_client": false, 00:15:09.510 "enable_zerocopy_send_server": true, 00:15:09.510 "impl_name": "posix", 00:15:09.510 "recv_buf_size": 2097152, 00:15:09.510 "send_buf_size": 2097152, 00:15:09.510 "tls_version": 0, 00:15:09.510 "zerocopy_threshold": 0 00:15:09.511 } 00:15:09.511 } 00:15:09.511 ] 00:15:09.511 }, 00:15:09.511 { 00:15:09.511 "subsystem": "vmd", 00:15:09.511 "config": [] 00:15:09.511 }, 00:15:09.511 { 00:15:09.511 "subsystem": "accel", 00:15:09.511 "config": [ 00:15:09.511 { 00:15:09.511 "method": "accel_set_options", 00:15:09.511 "params": { 00:15:09.511 "buf_count": 2048, 00:15:09.511 "large_cache_size": 16, 00:15:09.511 "sequence_count": 2048, 00:15:09.511 "small_cache_size": 128, 00:15:09.511 "task_count": 2048 00:15:09.511 } 00:15:09.511 } 00:15:09.511 ] 00:15:09.511 }, 00:15:09.511 { 00:15:09.511 "subsystem": "bdev", 00:15:09.511 "config": [ 00:15:09.511 { 00:15:09.511 "method": "bdev_set_options", 00:15:09.511 "params": { 00:15:09.511 "bdev_auto_examine": true, 00:15:09.511 "bdev_io_cache_size": 256, 00:15:09.511 "bdev_io_pool_size": 65535, 00:15:09.511 "iobuf_large_cache_size": 16, 00:15:09.511 "iobuf_small_cache_size": 128 00:15:09.511 } 00:15:09.511 }, 00:15:09.511 { 00:15:09.511 "method": "bdev_raid_set_options", 00:15:09.511 "params": { 00:15:09.511 "process_max_bandwidth_mb_sec": 0, 00:15:09.511 "process_window_size_kb": 1024 00:15:09.511 } 00:15:09.511 }, 00:15:09.511 { 00:15:09.511 "method": "bdev_iscsi_set_options", 00:15:09.511 "params": { 00:15:09.511 "timeout_sec": 30 00:15:09.511 } 00:15:09.511 }, 00:15:09.511 { 00:15:09.511 "method": "bdev_nvme_set_options", 00:15:09.511 "params": { 00:15:09.511 "action_on_timeout": "none", 00:15:09.511 "allow_accel_sequence": false, 00:15:09.511 "arbitration_burst": 0, 00:15:09.511 "bdev_retry_count": 3, 00:15:09.511 "ctrlr_loss_timeout_sec": 0, 00:15:09.511 "delay_cmd_submit": true, 00:15:09.511 "dhchap_dhgroups": [ 00:15:09.511 "null", 00:15:09.511 "ffdhe2048", 00:15:09.511 "ffdhe3072", 00:15:09.511 "ffdhe4096", 00:15:09.511 "ffdhe6144", 00:15:09.511 "ffdhe8192" 00:15:09.511 ], 00:15:09.511 "dhchap_digests": [ 00:15:09.511 "sha256", 00:15:09.511 "sha384", 00:15:09.511 "sha512" 00:15:09.511 ], 00:15:09.511 
"disable_auto_failback": false, 00:15:09.511 "fast_io_fail_timeout_sec": 0, 00:15:09.511 "generate_uuids": false, 00:15:09.511 "high_priority_weight": 0, 00:15:09.511 "io_path_stat": false, 00:15:09.511 "io_queue_requests": 512, 00:15:09.511 "keep_alive_timeout_ms": 10000, 00:15:09.511 "low_priority_weight": 0, 00:15:09.511 "medium_priority_weight": 0, 00:15:09.511 "nvme_adminq_poll_period_us": 10000, 00:15:09.511 "nvme_error_stat": false, 00:15:09.511 "nvme_ioq_poll_period_us": 0, 00:15:09.511 "rdma_cm_event_timeout_ms": 0, 00:15:09.511 "rdma_max_cq_size": 0, 00:15:09.511 "rdma_srq_size": 0, 00:15:09.511 "reconnect_delay_sec": 0, 00:15:09.511 "timeout_admin_us": 0, 00:15:09.511 "timeout_us": 0, 00:15:09.511 "transport_ack_timeout": 0, 00:15:09.511 "transport_retry_count": 4, 00:15:09.511 "transport_tos": 0 00:15:09.511 } 00:15:09.511 }, 00:15:09.511 { 00:15:09.511 "method": "bdev_nvme_attach_controller", 00:15:09.511 "params": { 00:15:09.511 "adrfam": "IPv4", 00:15:09.511 "ctrlr_loss_timeout_sec": 0, 00:15:09.511 "ddgst": false, 00:15:09.511 "fast_io_fail_timeout_sec": 0, 00:15:09.511 "hdgst": false, 00:15:09.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.511 "name": "nvme0", 00:15:09.511 "prchk_guard": false, 00:15:09.511 "prchk_reftag": false, 00:15:09.511 "psk": "key0", 00:15:09.511 "reconnect_delay_sec": 0, 00:15:09.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.511 "traddr": "10.0.0.2", 00:15:09.511 "trsvcid": "4420", 00:15:09.511 "trtype": "TCP" 00:15:09.511 } 00:15:09.511 }, 00:15:09.511 { 00:15:09.511 "method": "bdev_nvme_set_hotplug", 00:15:09.511 "params": { 00:15:09.511 "enable": false, 00:15:09.511 "period_us": 100000 00:15:09.511 } 00:15:09.511 }, 00:15:09.511 { 00:15:09.511 "method": "bdev_enable_histogram", 00:15:09.511 "params": { 00:15:09.511 "enable": true, 00:15:09.511 "name": "nvme0n1" 00:15:09.511 } 00:15:09.511 }, 00:15:09.511 { 00:15:09.511 "method": "bdev_wait_for_examine" 00:15:09.511 } 00:15:09.511 ] 00:15:09.511 }, 00:15:09.511 { 00:15:09.511 "subsystem": "nbd", 00:15:09.511 "config": [] 00:15:09.511 } 00:15:09.511 ] 00:15:09.511 }' 00:15:09.511 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 84320 00:15:09.511 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84320 ']' 00:15:09.511 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84320 00:15:09.511 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:09.511 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.511 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84320 00:15:09.511 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:09.511 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:09.511 killing process with pid 84320 00:15:09.511 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84320' 00:15:09.511 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84320 00:15:09.511 Received shutdown signal, test time was about 1.000000 seconds 00:15:09.511 00:15:09.511 Latency(us) 00:15:09.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.511 
=================================================================================================================== 00:15:09.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:09.511 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84320 00:15:09.770 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 84270 00:15:09.770 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84270 ']' 00:15:09.770 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84270 00:15:09.770 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:09.770 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.770 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84270 00:15:09.770 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:09.770 killing process with pid 84270 00:15:09.770 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:09.770 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84270' 00:15:09.770 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84270 00:15:09.770 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84270 00:15:10.029 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:15:10.029 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:10.029 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:10.029 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:15:10.029 "subsystems": [ 00:15:10.029 { 00:15:10.029 "subsystem": "keyring", 00:15:10.029 "config": [ 00:15:10.029 { 00:15:10.029 "method": "keyring_file_add_key", 00:15:10.029 "params": { 00:15:10.029 "name": "key0", 00:15:10.029 "path": "/tmp/tmp.q58C0ybhuD" 00:15:10.029 } 00:15:10.029 } 00:15:10.029 ] 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "subsystem": "iobuf", 00:15:10.029 "config": [ 00:15:10.029 { 00:15:10.029 "method": "iobuf_set_options", 00:15:10.029 "params": { 00:15:10.029 "large_bufsize": 135168, 00:15:10.029 "large_pool_count": 1024, 00:15:10.029 "small_bufsize": 8192, 00:15:10.029 "small_pool_count": 8192 00:15:10.029 } 00:15:10.029 } 00:15:10.029 ] 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "subsystem": "sock", 00:15:10.029 "config": [ 00:15:10.029 { 00:15:10.029 "method": "sock_set_default_impl", 00:15:10.029 "params": { 00:15:10.029 "impl_name": "posix" 00:15:10.029 } 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "method": "sock_impl_set_options", 00:15:10.029 "params": { 00:15:10.029 "enable_ktls": false, 00:15:10.029 "enable_placement_id": 0, 00:15:10.029 "enable_quickack": false, 00:15:10.029 "enable_recv_pipe": true, 00:15:10.029 "enable_zerocopy_send_client": false, 00:15:10.029 "enable_zerocopy_send_server": true, 00:15:10.029 "impl_name": "ssl", 00:15:10.029 "recv_buf_size": 4096, 00:15:10.029 "send_buf_size": 4096, 00:15:10.029 "tls_version": 0, 00:15:10.029 "zerocopy_threshold": 0 00:15:10.029 } 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "method": "sock_impl_set_options", 00:15:10.029 
"params": { 00:15:10.029 "enable_ktls": false, 00:15:10.029 "enable_placement_id": 0, 00:15:10.029 "enable_quickack": false, 00:15:10.029 "enable_recv_pipe": true, 00:15:10.029 "enable_zerocopy_send_client": false, 00:15:10.029 "enable_zerocopy_send_server": true, 00:15:10.029 "impl_name": "posix", 00:15:10.029 "recv_buf_size": 2097152, 00:15:10.029 "send_buf_size": 2097152, 00:15:10.029 "tls_version": 0, 00:15:10.029 "zerocopy_threshold": 0 00:15:10.029 } 00:15:10.029 } 00:15:10.029 ] 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "subsystem": "vmd", 00:15:10.029 "config": [] 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "subsystem": "accel", 00:15:10.029 "config": [ 00:15:10.029 { 00:15:10.029 "method": "accel_set_options", 00:15:10.029 "params": { 00:15:10.029 "buf_count": 2048, 00:15:10.029 "large_cache_size": 16, 00:15:10.029 "sequence_count": 2048, 00:15:10.029 "small_cache_size": 128, 00:15:10.029 "task_count": 2048 00:15:10.029 } 00:15:10.029 } 00:15:10.029 ] 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "subsystem": "bdev", 00:15:10.029 "config": [ 00:15:10.029 { 00:15:10.029 "method": "bdev_set_options", 00:15:10.029 "params": { 00:15:10.029 "bdev_auto_examine": true, 00:15:10.029 "bdev_io_cache_size": 256, 00:15:10.029 "bdev_io_pool_size": 65535, 00:15:10.029 "iobuf_large_cache_size": 16, 00:15:10.029 "iobuf_small_cache_size": 128 00:15:10.029 } 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "method": "bdev_raid_set_options", 00:15:10.029 "params": { 00:15:10.029 "process_max_bandwidth_mb_sec": 0, 00:15:10.029 "process_window_size_kb": 1024 00:15:10.029 } 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "method": "bdev_iscsi_set_options", 00:15:10.029 "params": { 00:15:10.029 "timeout_sec": 30 00:15:10.029 } 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "method": "bdev_nvme_set_options", 00:15:10.029 "params": { 00:15:10.029 "action_on_timeout": "none", 00:15:10.029 "allow_accel_sequence": false, 00:15:10.029 "arbitration_burst": 0, 00:15:10.029 "bdev_retry_count": 3, 00:15:10.029 "ctrlr_loss_timeout_sec": 0, 00:15:10.029 "delay_cmd_submit": true, 00:15:10.029 "dhchap_dhgroups": [ 00:15:10.029 "null", 00:15:10.029 "ffdhe2048", 00:15:10.029 "ffdhe3072", 00:15:10.029 "ffdhe4096", 00:15:10.029 "ffdhe6144", 00:15:10.029 "ffdhe8192" 00:15:10.029 ], 00:15:10.029 "dhchap_digests": [ 00:15:10.029 "sha256", 00:15:10.029 "sha384", 00:15:10.029 "sha512" 00:15:10.029 ], 00:15:10.029 "disable_auto_failback": false, 00:15:10.029 "fast_io_fail_timeout_sec": 0, 00:15:10.029 "generate_uuids": false, 00:15:10.029 "high_priority_weight": 0, 00:15:10.029 "io_path_stat": false, 00:15:10.029 "io_queue_requests": 0, 00:15:10.029 "keep_alive_timeout_ms": 10000, 00:15:10.029 "low_priority_weight": 0, 00:15:10.029 "medium_priority_weight": 0, 00:15:10.029 "nvme_adminq_poll_period_us": 10000, 00:15:10.029 "nvme_error_stat": false, 00:15:10.029 "nvme_ioq_poll_period_us": 0, 00:15:10.029 "rdma_cm_event_timeout_ms": 0, 00:15:10.029 "rdma_max_cq_size": 0, 00:15:10.029 "rdma_srq_size": 0, 00:15:10.029 "reconnect_delay_sec": 0, 00:15:10.029 "timeout_admin_us": 0, 00:15:10.029 "timeout_us": 0, 00:15:10.029 "transport_ack_timeout": 0, 00:15:10.029 "transport_retry_count": 4, 00:15:10.029 "transport_tos": 0 00:15:10.029 } 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "method": "bdev_nvme_set_hotplug", 00:15:10.029 "params": { 00:15:10.029 "enable": false, 00:15:10.029 "period_us": 100000 00:15:10.029 } 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "method": "bdev_malloc_create", 00:15:10.029 "params": { 00:15:10.029 "block_size": 
4096, 00:15:10.029 "dif_is_head_of_md": false, 00:15:10.029 "dif_pi_format": 0, 00:15:10.029 "dif_type": 0, 00:15:10.029 "md_size": 0, 00:15:10.029 "name": "malloc0", 00:15:10.029 "num_blocks": 8192, 00:15:10.029 "optimal_io_boundary": 0, 00:15:10.029 "physical_block_size": 4096, 00:15:10.030 "uuid": "f0fd1763-75b2-4083-a144-7edb1454aec3" 00:15:10.030 } 00:15:10.030 }, 00:15:10.030 { 00:15:10.030 "method": "bdev_wait_for_examine" 00:15:10.030 } 00:15:10.030 ] 00:15:10.030 }, 00:15:10.030 { 00:15:10.030 "subsystem": "nbd", 00:15:10.030 "config": [] 00:15:10.030 }, 00:15:10.030 { 00:15:10.030 "subsystem": "scheduler", 00:15:10.030 "config": [ 00:15:10.030 { 00:15:10.030 "method": "framework_set_scheduler", 00:15:10.030 "params": { 00:15:10.030 "name": "static" 00:15:10.030 } 00:15:10.030 } 00:15:10.030 ] 00:15:10.030 }, 00:15:10.030 { 00:15:10.030 "subsystem": "nvmf", 00:15:10.030 "config": [ 00:15:10.030 { 00:15:10.030 "method": "nvmf_set_config", 00:15:10.030 "params": { 00:15:10.030 "admin_cmd_passthru": { 00:15:10.030 "identify_ctrlr": false 00:15:10.030 }, 00:15:10.030 "discovery_filter": "match_any" 00:15:10.030 } 00:15:10.030 }, 00:15:10.030 { 00:15:10.030 "method": "nvmf_set_max_subsystems", 00:15:10.030 "params": { 00:15:10.030 "max_subsystems": 1024 00:15:10.030 } 00:15:10.030 }, 00:15:10.030 { 00:15:10.030 "method": "nvmf_set_crdt", 00:15:10.030 "params": { 00:15:10.030 "crdt1": 0, 00:15:10.030 "crdt2": 0, 00:15:10.030 "crdt3": 0 00:15:10.030 } 00:15:10.030 }, 00:15:10.030 { 00:15:10.030 "method": "nvmf_create_transport", 00:15:10.030 "params": { 00:15:10.030 "abort_timeout_sec": 1, 00:15:10.030 "ack_timeout": 0, 00:15:10.030 "buf_cache_size": 4294967295, 00:15:10.030 "c2h_success": false, 00:15:10.030 "data_wr_pool_size": 0, 00:15:10.030 "dif_insert_or_strip": false, 00:15:10.030 "in_capsule_data_size": 4096, 00:15:10.030 "io_unit_size": 131072, 00:15:10.030 "max_aq_depth": 128, 00:15:10.030 "max_io_qpairs_per_ctrlr": 127, 00:15:10.030 "max_io_size": 131072, 00:15:10.030 "max_queue_depth": 128, 00:15:10.030 "num_shared_buffers": 511, 00:15:10.030 "sock_priority": 0, 00:15:10.030 "trtype": "TCP", 00:15:10.030 "zcopy": false 00:15:10.030 } 00:15:10.030 }, 00:15:10.030 { 00:15:10.030 "method": "nvmf_create_subsystem", 00:15:10.030 "params": { 00:15:10.030 "allow_any_host": false, 00:15:10.030 "ana_reporting": false, 00:15:10.030 "max_cntlid": 65519, 00:15:10.030 "max_namespaces": 32, 00:15:10.030 "min_cntlid": 1, 00:15:10.030 "model_number": "SPDK bdev Controller", 00:15:10.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.030 "serial_number": "00000000000000000000" 00:15:10.030 } 00:15:10.030 }, 00:15:10.030 { 00:15:10.030 "method": "nvmf_subsystem_add_host", 00:15:10.030 "params": { 00:15:10.030 "host": "nqn.2016-06.io.spdk:host1", 00:15:10.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.030 "psk": "key0" 00:15:10.030 } 00:15:10.030 }, 00:15:10.030 { 00:15:10.030 "method": "nvmf_subsystem_add_ns", 00:15:10.030 "params": { 00:15:10.030 "namespace": { 00:15:10.030 "bdev_name": "malloc0", 00:15:10.030 "nguid": "F0FD176375B24083A1447EDB1454AEC3", 00:15:10.030 "no_auto_visible": false, 00:15:10.030 "nsid": 1, 00:15:10.030 "uuid": "f0fd1763-75b2-4083-a144-7edb1454aec3" 00:15:10.030 }, 00:15:10.030 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:10.030 } 00:15:10.030 }, 00:15:10.030 { 00:15:10.030 "method": "nvmf_subsystem_add_listener", 00:15:10.030 "params": { 00:15:10.030 "listen_address": { 00:15:10.030 "adrfam": "IPv4", 00:15:10.030 "traddr": "10.0.0.2", 00:15:10.030 "trsvcid": "4420", 
00:15:10.030 "trtype": "TCP" 00:15:10.030 }, 00:15:10.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.030 "secure_channel": false, 00:15:10.030 "sock_impl": "ssl" 00:15:10.030 } 00:15:10.030 } 00:15:10.030 ] 00:15:10.030 } 00:15:10.030 ] 00:15:10.030 }' 00:15:10.030 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.030 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84397 00:15:10.030 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84397 00:15:10.030 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84397 ']' 00:15:10.030 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:10.030 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.030 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.030 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.030 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.030 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.030 [2024-07-24 18:02:16.904956] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:15:10.030 [2024-07-24 18:02:16.905074] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.289 [2024-07-24 18:02:17.041616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.289 [2024-07-24 18:02:17.178955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.289 [2024-07-24 18:02:17.179045] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.289 [2024-07-24 18:02:17.179065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.289 [2024-07-24 18:02:17.179079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.289 [2024-07-24 18:02:17.179092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:10.289 [2024-07-24 18:02:17.179207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.547 [2024-07-24 18:02:17.401421] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.547 [2024-07-24 18:02:17.433375] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:10.547 [2024-07-24 18:02:17.433621] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=84441 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 84441 /var/tmp/bdevperf.sock 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84441 ']' 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
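[editor's note] waitforlisten, traced above, blocks until the freshly started process answers on its RPC socket. A rough illustrative stand-in is sketched below; it is not the actual helper from autotest_common.sh, which also verifies that the pid is still alive and honours its max_retries counter.

# Poll the UNIX-domain RPC socket until it answers, or give up after ~50s.
wait_for_rpc_sock() {
    local sock=$1 i
    for (( i = 0; i < 100; i++ )); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
            &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}
wait_for_rpc_sock /var/tmp/bdevperf.sock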
00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:11.115 18:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:15:11.115 "subsystems": [ 00:15:11.115 { 00:15:11.115 "subsystem": "keyring", 00:15:11.115 "config": [ 00:15:11.115 { 00:15:11.115 "method": "keyring_file_add_key", 00:15:11.115 "params": { 00:15:11.115 "name": "key0", 00:15:11.115 "path": "/tmp/tmp.q58C0ybhuD" 00:15:11.115 } 00:15:11.115 } 00:15:11.115 ] 00:15:11.115 }, 00:15:11.115 { 00:15:11.115 "subsystem": "iobuf", 00:15:11.115 "config": [ 00:15:11.115 { 00:15:11.115 "method": "iobuf_set_options", 00:15:11.115 "params": { 00:15:11.115 "large_bufsize": 135168, 00:15:11.115 "large_pool_count": 1024, 00:15:11.115 "small_bufsize": 8192, 00:15:11.115 "small_pool_count": 8192 00:15:11.115 } 00:15:11.115 } 00:15:11.115 ] 00:15:11.115 }, 00:15:11.115 { 00:15:11.115 "subsystem": "sock", 00:15:11.115 "config": [ 00:15:11.115 { 00:15:11.115 "method": "sock_set_default_impl", 00:15:11.115 "params": { 00:15:11.115 "impl_name": "posix" 00:15:11.115 } 00:15:11.115 }, 00:15:11.115 { 00:15:11.115 "method": "sock_impl_set_options", 00:15:11.115 "params": { 00:15:11.115 "enable_ktls": false, 00:15:11.115 "enable_placement_id": 0, 00:15:11.115 "enable_quickack": false, 00:15:11.115 "enable_recv_pipe": true, 00:15:11.115 "enable_zerocopy_send_client": false, 00:15:11.115 "enable_zerocopy_send_server": true, 00:15:11.115 "impl_name": "ssl", 00:15:11.115 "recv_buf_size": 4096, 00:15:11.115 "send_buf_size": 4096, 00:15:11.115 "tls_version": 0, 00:15:11.115 "zerocopy_threshold": 0 00:15:11.115 } 00:15:11.115 }, 00:15:11.115 { 00:15:11.115 "method": "sock_impl_set_options", 00:15:11.115 "params": { 00:15:11.115 "enable_ktls": false, 00:15:11.115 "enable_placement_id": 0, 00:15:11.115 "enable_quickack": false, 00:15:11.115 "enable_recv_pipe": true, 00:15:11.115 "enable_zerocopy_send_client": false, 00:15:11.115 "enable_zerocopy_send_server": true, 00:15:11.115 "impl_name": "posix", 00:15:11.115 "recv_buf_size": 2097152, 00:15:11.115 "send_buf_size": 2097152, 00:15:11.115 "tls_version": 0, 00:15:11.115 "zerocopy_threshold": 0 00:15:11.115 } 00:15:11.115 } 00:15:11.115 ] 00:15:11.115 }, 00:15:11.115 { 00:15:11.115 "subsystem": "vmd", 00:15:11.115 "config": [] 00:15:11.115 }, 00:15:11.115 { 00:15:11.115 "subsystem": "accel", 00:15:11.115 "config": [ 00:15:11.115 { 00:15:11.115 "method": "accel_set_options", 00:15:11.115 "params": { 00:15:11.115 "buf_count": 2048, 00:15:11.115 "large_cache_size": 16, 00:15:11.115 "sequence_count": 2048, 00:15:11.115 "small_cache_size": 128, 00:15:11.115 "task_count": 2048 00:15:11.115 } 00:15:11.115 } 00:15:11.115 ] 00:15:11.115 }, 00:15:11.115 { 00:15:11.115 "subsystem": "bdev", 00:15:11.115 "config": [ 00:15:11.115 { 00:15:11.115 "method": "bdev_set_options", 00:15:11.115 "params": { 00:15:11.115 "bdev_auto_examine": true, 00:15:11.115 "bdev_io_cache_size": 256, 00:15:11.115 "bdev_io_pool_size": 65535, 00:15:11.115 "iobuf_large_cache_size": 16, 00:15:11.115 "iobuf_small_cache_size": 128 00:15:11.115 } 00:15:11.115 }, 00:15:11.115 { 00:15:11.115 "method": "bdev_raid_set_options", 00:15:11.115 "params": { 00:15:11.115 
"process_max_bandwidth_mb_sec": 0, 00:15:11.115 "process_window_size_kb": 1024 00:15:11.115 } 00:15:11.115 }, 00:15:11.115 { 00:15:11.115 "method": "bdev_iscsi_set_options", 00:15:11.115 "params": { 00:15:11.115 "timeout_sec": 30 00:15:11.115 } 00:15:11.115 }, 00:15:11.115 { 00:15:11.115 "method": "bdev_nvme_set_options", 00:15:11.115 "params": { 00:15:11.115 "action_on_timeout": "none", 00:15:11.115 "allow_accel_sequence": false, 00:15:11.115 "arbitration_burst": 0, 00:15:11.115 "bdev_retry_count": 3, 00:15:11.115 "ctrlr_loss_timeout_sec": 0, 00:15:11.115 "delay_cmd_submit": true, 00:15:11.115 "dhchap_dhgroups": [ 00:15:11.115 "null", 00:15:11.115 "ffdhe2048", 00:15:11.115 "ffdhe3072", 00:15:11.115 "ffdhe4096", 00:15:11.115 "ffdhe6144", 00:15:11.115 "ffdhe8192" 00:15:11.115 ], 00:15:11.116 "dhchap_digests": [ 00:15:11.116 "sha256", 00:15:11.116 "sha384", 00:15:11.116 "sha512" 00:15:11.116 ], 00:15:11.116 "disable_auto_failback": false, 00:15:11.116 "fast_io_fail_timeout_sec": 0, 00:15:11.116 "generate_uuids": false, 00:15:11.116 "high_priority_weight": 0, 00:15:11.116 "io_path_stat": false, 00:15:11.116 "io_queue_requests": 512, 00:15:11.116 "keep_alive_timeout_ms": 10000, 00:15:11.116 "low_priority_weight": 0, 00:15:11.116 "medium_priority_weight": 0, 00:15:11.116 "nvme_adminq_poll_period_us": 10000, 00:15:11.116 "nvme_error_stat": false, 00:15:11.116 "nvme_ioq_poll_period_us": 0, 00:15:11.116 "rdma_cm_event_timeout_ms": 0, 00:15:11.116 "rdma_max_cq_size": 0, 00:15:11.116 "rdma_srq_size": 0, 00:15:11.116 "reconnect_delay_sec": 0, 00:15:11.116 "timeout_admin_us": 0, 00:15:11.116 "timeout_us": 0, 00:15:11.116 "transport_ack_timeout": 0, 00:15:11.116 "transport_retry_count": 4, 00:15:11.116 "transport_tos": 0 00:15:11.116 } 00:15:11.116 }, 00:15:11.116 { 00:15:11.116 "method": "bdev_nvme_attach_controller", 00:15:11.116 "params": { 00:15:11.116 "adrfam": "IPv4", 00:15:11.116 "ctrlr_loss_timeout_sec": 0, 00:15:11.116 "ddgst": false, 00:15:11.116 "fast_io_fail_timeout_sec": 0, 00:15:11.116 "hdgst": false, 00:15:11.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.116 "name": "nvme0", 00:15:11.116 "prchk_guard": false, 00:15:11.116 "prchk_reftag": false, 00:15:11.116 "psk": "key0", 00:15:11.116 "reconnect_delay_sec": 0, 00:15:11.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.116 "traddr": "10.0.0.2", 00:15:11.116 "trsvcid": "4420", 00:15:11.116 "trtype": "TCP" 00:15:11.116 } 00:15:11.116 }, 00:15:11.116 { 00:15:11.116 "method": "bdev_nvme_set_hotplug", 00:15:11.116 "params": { 00:15:11.116 "enable": false, 00:15:11.116 "period_us": 100000 00:15:11.116 } 00:15:11.116 }, 00:15:11.116 { 00:15:11.116 "method": "bdev_enable_histogram", 00:15:11.116 "params": { 00:15:11.116 "enable": true, 00:15:11.116 "name": "nvme0n1" 00:15:11.116 } 00:15:11.116 }, 00:15:11.116 { 00:15:11.116 "method": "bdev_wait_for_examine" 00:15:11.116 } 00:15:11.116 ] 00:15:11.116 }, 00:15:11.116 { 00:15:11.116 "subsystem": "nbd", 00:15:11.116 "config": [] 00:15:11.116 } 00:15:11.116 ] 00:15:11.116 }' 00:15:11.116 [2024-07-24 18:02:17.963833] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:15:11.116 [2024-07-24 18:02:17.963943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84441 ] 00:15:11.374 [2024-07-24 18:02:18.110959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.374 [2024-07-24 18:02:18.292440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.632 [2024-07-24 18:02:18.510389] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:12.198 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.198 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:12.198 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:12.198 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:15:12.455 18:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.455 18:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:12.455 Running I/O for 1 seconds... 00:15:13.451 00:15:13.451 Latency(us) 00:15:13.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.451 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:13.451 Verification LBA range: start 0x0 length 0x2000 00:15:13.451 nvme0n1 : 1.02 4342.21 16.96 0.00 0.00 29189.74 6522.39 24716.43 00:15:13.451 =================================================================================================================== 00:15:13.451 Total : 4342.21 16.96 0.00 0.00 29189.74 6522.39 24716.43 00:15:13.451 0 00:15:13.451 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:15:13.451 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:15:13.451 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:13.451 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:15:13.451 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:15:13.451 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:13.451 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:13.451 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:13.451 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:13.451 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:13.451 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:13.451 nvmf_trace.0 00:15:13.709 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:15:13.709 18:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84441 00:15:13.709 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84441 ']' 00:15:13.709 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84441 00:15:13.709 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:13.709 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.709 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84441 00:15:13.709 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:13.709 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:13.709 killing process with pid 84441 00:15:13.709 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84441' 00:15:13.709 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84441 00:15:13.709 Received shutdown signal, test time was about 1.000000 seconds 00:15:13.709 00:15:13.709 Latency(us) 00:15:13.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.709 =================================================================================================================== 00:15:13.709 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.709 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84441 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.967 rmmod nvme_tcp 00:15:13.967 rmmod nvme_fabrics 00:15:13.967 rmmod nvme_keyring 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 84397 ']' 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 84397 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84397 ']' 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84397 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84397 00:15:13.967 killing process with 
pid 84397 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84397' 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84397 00:15:13.967 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84397 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Ib7gIzbgTM /tmp/tmp.XCsBoiBMSN /tmp/tmp.q58C0ybhuD 00:15:14.225 00:15:14.225 real 1m26.192s 00:15:14.225 user 2m12.493s 00:15:14.225 sys 0m30.900s 00:15:14.225 ************************************ 00:15:14.225 END TEST nvmf_tls 00:15:14.225 ************************************ 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:14.225 ************************************ 00:15:14.225 START TEST nvmf_fips 00:15:14.225 ************************************ 00:15:14.225 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:14.225 * Looking for test storage... 
00:15:14.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
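[editor's note] check_openssl_version, which the trace below steps through, extracts the installed version with "openssl version | awk '{print $2}'" and then compares it field by field against the 3.0.0 floor using the cmp_versions helper from scripts/common.sh. Condensed to its core idea, the check amounts to the simplified sketch below; it assumes purely numeric dot-separated components, whereas the real cmp_versions also validates each field through its decimal helper before comparing.

openssl_ge() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"   # installed version, e.g. 3.0.9
    IFS=.- read -ra ver2 <<< "$2"   # required minimum, e.g. 3.0.0
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 0   # strictly newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 1   # strictly older
    done
    return 0                                              # versions are equal
}
openssl_ge "$(openssl version | awk '{print $2}')" 3.0.0 || exit 1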
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 
00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:14.485 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:14.486 Error setting digest 00:15:14.486 00B282C27E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:14.486 00B282C27E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:14.486 Cannot find device "nvmf_tgt_br" 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:14.486 Cannot find device "nvmf_tgt_br2" 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:14.486 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:14.745 Cannot find device "nvmf_tgt_br" 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:14.745 Cannot find device "nvmf_tgt_br2" 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip 
addr add 10.0.0.1/24 dev nvmf_init_if 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:14.745 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:15.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:15:15.004 00:15:15.004 --- 10.0.0.2 ping statistics --- 00:15:15.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.004 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:15.004 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:15.004 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:15.004 00:15:15.004 --- 10.0.0.3 ping statistics --- 00:15:15.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.004 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:15.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:15.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:15:15.004 00:15:15.004 --- 10.0.0.1 ping statistics --- 00:15:15.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.004 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=84723 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 84723 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84723 ']' 00:15:15.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.004 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.005 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.005 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.005 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:15.005 [2024-07-24 18:02:21.873733] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:15:15.005 [2024-07-24 18:02:21.874028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.264 [2024-07-24 18:02:22.022551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.264 [2024-07-24 18:02:22.148629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.264 [2024-07-24 18:02:22.148696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.264 [2024-07-24 18:02:22.148711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.264 [2024-07-24 18:02:22.148724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.264 [2024-07-24 18:02:22.148735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.264 [2024-07-24 18:02:22.148775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:16.200 18:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.457 [2024-07-24 18:02:23.175471] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.457 [2024-07-24 18:02:23.191419] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:16.457 [2024-07-24 18:02:23.191632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.457 [2024-07-24 18:02:23.220870] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 
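The key= value registered above is in the NVMe/TCP TLS PSK interchange format: the "01" field identifies a 32-byte PSK (SHA-256 family), and the base64 payload carries the PSK followed by four CRC-32 bytes (treat that exact 32+4 split as an assumption about the interchange format, not something the log itself states). A quick shape check of the very string used here, needing only bash and coreutils, might look like:

    # Hypothetical sanity check of the PSK interchange string written to key.txt
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    payload=${key#NVMeTLSkey-1:01:}    # strip the prefix and hash identifier
    payload=${payload%:}               # strip the trailing ':'
    printf '%s' "$payload" | base64 -d | wc -c   # prints 36 = 32-byte PSK + 4-byte CRC-32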
00:15:16.457 malloc0 00:15:16.457 18:02:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:16.457 18:02:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=84780 00:15:16.457 18:02:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:16.457 18:02:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 84780 /var/tmp/bdevperf.sock 00:15:16.457 18:02:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84780 ']' 00:15:16.457 18:02:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.457 18:02:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:16.457 18:02:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.457 18:02:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:16.457 18:02:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:16.457 [2024-07-24 18:02:23.374355] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:15:16.457 [2024-07-24 18:02:23.374729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84780 ] 00:15:16.714 [2024-07-24 18:02:23.514784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.714 [2024-07-24 18:02:23.623453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.378 18:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.378 18:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:17.378 18:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:17.637 [2024-07-24 18:02:24.521624] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.637 [2024-07-24 18:02:24.521739] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:17.637 TLSTESTn1 00:15:17.896 18:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:17.896 Running I/O for 10 seconds... 
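In the results table that follows, the MiB/s column is just the measured IOPS multiplied by the 4096-byte I/O size configured on the bdevperf command line above; a one-line bc check reproduces the reported figure:

    # Cross-check of the throughput column in the table below
    echo '3886.25 * 4096 / 1048576' | bc -l    # ~= 15.18 MiB/s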
00:15:27.917 00:15:27.917 Latency(us) 00:15:27.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.917 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:27.917 Verification LBA range: start 0x0 length 0x2000 00:15:27.917 TLSTESTn1 : 10.02 3886.25 15.18 0.00 0.00 32874.45 7146.54 38198.13 00:15:27.917 =================================================================================================================== 00:15:27.917 Total : 3886.25 15.18 0.00 0.00 32874.45 7146.54 38198.13 00:15:27.917 0 00:15:27.917 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:27.917 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:27.917 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:15:27.917 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:15:27.917 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:27.917 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:27.918 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:27.918 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:27.918 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:27.918 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:27.918 nvmf_trace.0 00:15:27.918 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:15:27.918 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84780 00:15:27.918 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84780 ']' 00:15:27.918 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84780 00:15:27.918 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:27.918 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:27.918 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84780 00:15:28.176 killing process with pid 84780 00:15:28.176 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.176 00:15:28.176 Latency(us) 00:15:28.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.176 =================================================================================================================== 00:15:28.176 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.176 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:28.176 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:28.176 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84780' 00:15:28.176 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84780 00:15:28.176 [2024-07-24 18:02:34.904802] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:28.176 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84780 00:15:28.176 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:28.176 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:28.176 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:28.176 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.176 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:28.176 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.176 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.176 rmmod nvme_tcp 00:15:28.434 rmmod nvme_fabrics 00:15:28.434 rmmod nvme_keyring 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 84723 ']' 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 84723 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84723 ']' 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84723 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84723 00:15:28.434 killing process with pid 84723 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84723' 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84723 00:15:28.434 [2024-07-24 18:02:35.238127] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:28.434 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84723 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:28.693 00:15:28.693 real 0m14.374s 00:15:28.693 user 0m18.701s 00:15:28.693 sys 0m6.298s 00:15:28.693 ************************************ 00:15:28.693 END TEST nvmf_fips 00:15:28.693 ************************************ 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:15:28.693 00:15:28.693 real 6m26.868s 00:15:28.693 user 15m19.423s 00:15:28.693 sys 1m34.793s 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:28.693 ************************************ 00:15:28.693 END TEST nvmf_target_extra 00:15:28.693 18:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.693 ************************************ 00:15:28.693 18:02:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:28.693 18:02:35 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:28.693 18:02:35 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:28.693 18:02:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.693 ************************************ 00:15:28.693 START TEST nvmf_host 00:15:28.693 ************************************ 00:15:28.693 18:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:28.693 * Looking for test storage... 
00:15:28.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.954 ************************************ 00:15:28.954 START TEST nvmf_multicontroller 00:15:28.954 ************************************ 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:15:28.954 * Looking for test storage... 
00:15:28.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.954 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:15:28.955 
18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:28.955 18:02:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:28.955 Cannot find device "nvmf_tgt_br" 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.955 Cannot find device "nvmf_tgt_br2" 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:28.955 Cannot find device "nvmf_tgt_br" 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:28.955 Cannot find device "nvmf_tgt_br2" 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:15:28.955 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:29.214 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:29.214 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.214 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.214 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:15:29.214 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.215 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:15:29.215 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:29.215 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:29.215 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:29.215 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:29.215 18:02:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set 
nvmf_init_br up 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:29.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:15:29.215 00:15:29.215 --- 10.0.0.2 ping statistics --- 00:15:29.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.215 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:29.215 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:29.215 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:15:29.215 00:15:29.215 --- 10.0.0.3 ping statistics --- 00:15:29.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.215 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:29.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:29.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:29.215 00:15:29.215 --- 10.0.0.1 ping statistics --- 00:15:29.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.215 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:29.215 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85177 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85177 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 85177 ']' 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.474 18:02:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:29.474 [2024-07-24 18:02:36.261238] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
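The nvmf_veth_init steps traced above build the fixed test topology that the tcp host tests in this run reuse: an initiator veth pair (nvmf_init_if / nvmf_init_br) left in the root namespace, two target veth pairs whose far ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace, and a bridge nvmf_br joining the host-side ends, with 10.0.0.1 on the initiator and 10.0.0.2 / 10.0.0.3 on the target interfaces. Condensed into a plain shell sketch (names, addresses and ports are taken from the trace; the loop and ordering are a simplification, not the exact nvmf/common.sh code, and the pre-cleanup "nomaster"/delete steps are omitted):

    # namespace plus three veth pairs: test end <-> bridge-side end
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # target-facing ends live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: initiator 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring links up and tie the host-side ends together with a bridge
    for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$link" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # let NVMe/TCP traffic reach port 4420 and cross the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings recorded above (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) are connectivity checks across the bridge before nvme-tcp is loaded and the target application is started inside the namespace.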
00:15:29.474 [2024-07-24 18:02:36.261375] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.475 [2024-07-24 18:02:36.407372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:29.733 [2024-07-24 18:02:36.528121] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.733 [2024-07-24 18:02:36.528192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.733 [2024-07-24 18:02:36.528208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.733 [2024-07-24 18:02:36.528228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.733 [2024-07-24 18:02:36.528252] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.733 [2024-07-24 18:02:36.528427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.733 [2024-07-24 18:02:36.528579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.733 [2024-07-24 18:02:36.528586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.297 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.297 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:15:30.298 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.298 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.298 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:30.556 [2024-07-24 18:02:37.322595] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:30.556 Malloc0 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:30.556 [2024-07-24 18:02:37.405237] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.556 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:30.557 [2024-07-24 18:02:37.413119] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:30.557 Malloc1 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85229 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85229 /var/tmp/bdevperf.sock 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 85229 ']' 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
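Everything the multicontroller test configures on the target goes through rpc_cmd, which is effectively scripts/rpc.py talking to the target's /var/tmp/spdk.sock; bdevperf is then started with -z so it idles until it receives RPC calls on its own /var/tmp/bdevperf.sock. Replayed by hand, the configuration above amounts to roughly the following (paths, names, ports and serial numbers copied from the trace; the $RPC shorthand is ours):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # rpc_cmd equivalent, default socket /var/tmp/spdk.sock

    $RPC nvmf_create_transport -t tcp -o -u 8192

    # first subsystem: one 64 MB malloc namespace (512-byte blocks), listeners on 4420 and 4421
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # second subsystem, same shape
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

    # bdevperf in RPC-wait mode: 4 KiB writes, queue depth 128, 1-second run once triggered
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &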
00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.557 18:02:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:31.933 NVMe0n1 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.933 1 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:31.933 2024/07/24 18:02:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 
hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:15:31.933 request: 00:15:31.933 { 00:15:31.933 "method": "bdev_nvme_attach_controller", 00:15:31.933 "params": { 00:15:31.933 "name": "NVMe0", 00:15:31.933 "trtype": "tcp", 00:15:31.933 "traddr": "10.0.0.2", 00:15:31.933 "adrfam": "ipv4", 00:15:31.933 "trsvcid": "4420", 00:15:31.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.933 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:15:31.933 "hostaddr": "10.0.0.2", 00:15:31.933 "hostsvcid": "60000", 00:15:31.933 "prchk_reftag": false, 00:15:31.933 "prchk_guard": false, 00:15:31.933 "hdgst": false, 00:15:31.933 "ddgst": false 00:15:31.933 } 00:15:31.933 } 00:15:31.933 Got JSON-RPC error response 00:15:31.933 GoRPCClient: error on JSON-RPC call 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:15:31.933 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:31.934 2024/07/24 18:02:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 
trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:15:31.934 request: 00:15:31.934 { 00:15:31.934 "method": "bdev_nvme_attach_controller", 00:15:31.934 "params": { 00:15:31.934 "name": "NVMe0", 00:15:31.934 "trtype": "tcp", 00:15:31.934 "traddr": "10.0.0.2", 00:15:31.934 "adrfam": "ipv4", 00:15:31.934 "trsvcid": "4420", 00:15:31.934 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:31.934 "hostaddr": "10.0.0.2", 00:15:31.934 "hostsvcid": "60000", 00:15:31.934 "prchk_reftag": false, 00:15:31.934 "prchk_guard": false, 00:15:31.934 "hdgst": false, 00:15:31.934 "ddgst": false 00:15:31.934 } 00:15:31.934 } 00:15:31.934 Got JSON-RPC error response 00:15:31.934 GoRPCClient: error on JSON-RPC call 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:31.934 2024/07/24 18:02:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:15:31.934 request: 00:15:31.934 { 
00:15:31.934 "method": "bdev_nvme_attach_controller", 00:15:31.934 "params": { 00:15:31.934 "name": "NVMe0", 00:15:31.934 "trtype": "tcp", 00:15:31.934 "traddr": "10.0.0.2", 00:15:31.934 "adrfam": "ipv4", 00:15:31.934 "trsvcid": "4420", 00:15:31.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.934 "hostaddr": "10.0.0.2", 00:15:31.934 "hostsvcid": "60000", 00:15:31.934 "prchk_reftag": false, 00:15:31.934 "prchk_guard": false, 00:15:31.934 "hdgst": false, 00:15:31.934 "ddgst": false, 00:15:31.934 "multipath": "disable" 00:15:31.934 } 00:15:31.934 } 00:15:31.934 Got JSON-RPC error response 00:15:31.934 GoRPCClient: error on JSON-RPC call 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:31.934 2024/07/24 18:02:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:15:31.934 request: 00:15:31.934 { 00:15:31.934 "method": "bdev_nvme_attach_controller", 00:15:31.934 "params": { 00:15:31.934 "name": "NVMe0", 00:15:31.934 "trtype": "tcp", 00:15:31.934 
"traddr": "10.0.0.2", 00:15:31.934 "adrfam": "ipv4", 00:15:31.934 "trsvcid": "4420", 00:15:31.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.934 "hostaddr": "10.0.0.2", 00:15:31.934 "hostsvcid": "60000", 00:15:31.934 "prchk_reftag": false, 00:15:31.934 "prchk_guard": false, 00:15:31.934 "hdgst": false, 00:15:31.934 "ddgst": false, 00:15:31.934 "multipath": "failover" 00:15:31.934 } 00:15:31.934 } 00:15:31.934 Got JSON-RPC error response 00:15:31.934 GoRPCClient: error on JSON-RPC call 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:31.934 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:31.934 00:15:31.934 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.935 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:31.935 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.935 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:31.935 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:15:31.935 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.935 18:02:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:15:31.935 18:02:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:33.309 0 00:15:33.310 18:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:15:33.310 18:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.310 18:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:33.310 18:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.310 18:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 85229 00:15:33.310 18:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 85229 ']' 00:15:33.310 18:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 85229 00:15:33.310 18:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:15:33.310 18:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.310 18:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85229 00:15:33.310 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:33.310 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:33.310 killing process with pid 85229 00:15:33.310 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85229' 00:15:33.310 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 85229 00:15:33.310 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 85229 00:15:33.568 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.568 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.568 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:33.568 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.568 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:33.568 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.568 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:33.568 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.568 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:15:33.568 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:33.568 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:15:33.569 18:02:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:15:33.569 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:15:33.569 [2024-07-24 18:02:37.542163] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:15:33.569 [2024-07-24 18:02:37.542480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85229 ] 00:15:33.569 [2024-07-24 18:02:37.679608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.569 [2024-07-24 18:02:37.841965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.569 [2024-07-24 18:02:38.820958] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 75a486e8-8ad2-4523-bb64-bd37bd26a147 already exists 00:15:33.569 [2024-07-24 18:02:38.821064] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:75a486e8-8ad2-4523-bb64-bd37bd26a147 alias for bdev NVMe1n1 00:15:33.569 [2024-07-24 18:02:38.821084] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:15:33.569 Running I/O for 1 seconds... 00:15:33.569 00:15:33.569 Latency(us) 00:15:33.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.569 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:15:33.569 NVMe0n1 : 1.01 21052.90 82.24 0.00 0.00 6065.26 2808.69 10922.67 00:15:33.569 =================================================================================================================== 00:15:33.569 Total : 21052.90 82.24 0.00 0.00 6065.26 2808.69 10922.67 00:15:33.569 Received shutdown signal, test time was about 1.000000 seconds 00:15:33.569 00:15:33.569 Latency(us) 00:15:33.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.569 =================================================================================================================== 00:15:33.569 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:33.569 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.569 rmmod nvme_tcp 00:15:33.569 rmmod nvme_fabrics 00:15:33.569 rmmod nvme_keyring 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85177 ']' 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85177 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 85177 ']' 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 85177 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85177 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:33.569 killing process with pid 85177 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85177' 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 85177 00:15:33.569 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 85177 00:15:33.828 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:33.828 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:33.828 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:33.828 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:33.828 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:33.828 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.828 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.828 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:34.087 00:15:34.087 real 0m5.122s 00:15:34.087 user 0m15.720s 00:15:34.087 sys 0m1.334s 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:34.087 ************************************ 00:15:34.087 END TEST nvmf_multicontroller 00:15:34.087 ************************************ 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.087 18:02:40 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.087 ************************************ 00:15:34.087 START TEST nvmf_aer 00:15:34.087 ************************************ 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:15:34.087 * Looking for test storage... 00:15:34.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.087 
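aer.sh sources nvmf/common.sh, which generates a fresh host identity for the run: NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID as the UUID embedded in it, and the NVME_HOST array bundling the --hostnqn/--hostid flags for later nvme-cli calls. A minimal restatement of that step (the exact extraction used by common.sh may differ; treating the NQN suffix after the last ':' as the host ID is our assumption):

    # e.g. nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee, as in the trace
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumption: UUID part after the last colon
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")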
18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.087 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.088 
18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:34.088 18:02:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:34.088 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:34.088 Cannot find device "nvmf_tgt_br" 00:15:34.088 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # true 00:15:34.088 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.088 Cannot find device "nvmf_tgt_br2" 00:15:34.088 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # true 00:15:34.088 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:34.088 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:34.088 Cannot find device "nvmf_tgt_br" 00:15:34.088 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # true 00:15:34.088 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:34.088 Cannot find device "nvmf_tgt_br2" 00:15:34.088 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # true 00:15:34.088 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:34.378 18:02:41 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:34.378 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.379 18:02:41 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:34.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:15:34.379 00:15:34.379 --- 10.0.0.2 ping statistics --- 00:15:34.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.379 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:34.379 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:34.379 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:15:34.379 00:15:34.379 --- 10.0.0.3 ping statistics --- 00:15:34.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.379 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:34.379 00:15:34.379 --- 10.0.0.1 ping statistics --- 00:15:34.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.379 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=85482 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 85482 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 85482 ']' 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
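nvmfappstart, traced above, backgrounds the target under ip netns exec and then waits (via waitforlisten in common/autotest_common.sh) for the application to come up on /var/tmp/spdk.sock. A minimal stand-in for that start-and-wait step, under the assumption that polling for the RPC UNIX socket is an acceptable readiness check (the real helper is more thorough):

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt    # path used by this CI image
  RPC_SOCK=/var/tmp/spdk.sock

  ip netns exec nvmf_tgt_ns_spdk "$SPDK_TGT" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  for _ in $(seq 1 100); do
      [[ -S $RPC_SOCK ]] && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done
  [[ -S $RPC_SOCK ]] || { echo "timed out waiting for $RPC_SOCK" >&2; exit 1; }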
00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.379 18:02:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:34.637 [2024-07-24 18:02:41.425701] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:15:34.637 [2024-07-24 18:02:41.425878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.637 [2024-07-24 18:02:41.581041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.898 [2024-07-24 18:02:41.745182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.898 [2024-07-24 18:02:41.745278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.898 [2024-07-24 18:02:41.745291] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.898 [2024-07-24 18:02:41.745301] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.898 [2024-07-24 18:02:41.745310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.898 [2024-07-24 18:02:41.745545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.898 [2024-07-24 18:02:41.746239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.898 [2024-07-24 18:02:41.746462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.898 [2024-07-24 18:02:41.746463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.466 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.466 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:15:35.466 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:35.467 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:35.467 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:35.725 [2024-07-24 18:02:42.471936] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:35.725 Malloc0 00:15:35.725 18:02:42 
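rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py pointed at /var/tmp/spdk.sock, so the two calls above can be reproduced directly (repository path as in this CI checkout):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$RPC" nvmf_create_transport -t tcp -o -u 8192     # transport options exactly as passed via NVMF_TRANSPORT_OPTS
  "$RPC" bdev_malloc_create 64 512 --name Malloc0    # 64 MB RAM-backed bdev, 512-byte blocks; prints "Malloc0"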
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:35.725 [2024-07-24 18:02:42.557399] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:35.725 [ 00:15:35.725 { 00:15:35.725 "allow_any_host": true, 00:15:35.725 "hosts": [], 00:15:35.725 "listen_addresses": [], 00:15:35.725 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.725 "subtype": "Discovery" 00:15:35.725 }, 00:15:35.725 { 00:15:35.725 "allow_any_host": true, 00:15:35.725 "hosts": [], 00:15:35.725 "listen_addresses": [ 00:15:35.725 { 00:15:35.725 "adrfam": "IPv4", 00:15:35.725 "traddr": "10.0.0.2", 00:15:35.725 "trsvcid": "4420", 00:15:35.725 "trtype": "TCP" 00:15:35.725 } 00:15:35.725 ], 00:15:35.725 "max_cntlid": 65519, 00:15:35.725 "max_namespaces": 2, 00:15:35.725 "min_cntlid": 1, 00:15:35.725 "model_number": "SPDK bdev Controller", 00:15:35.725 "namespaces": [ 00:15:35.725 { 00:15:35.725 "bdev_name": "Malloc0", 00:15:35.725 "name": "Malloc0", 00:15:35.725 "nguid": "E00D985A0B1540FEB20971B3C3A0CFEA", 00:15:35.725 "nsid": 1, 00:15:35.725 "uuid": "e00d985a-0b15-40fe-b209-71b3c3a0cfea" 00:15:35.725 } 00:15:35.725 ], 00:15:35.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.725 "serial_number": "SPDK00000000000001", 00:15:35.725 "subtype": "NVMe" 00:15:35.725 } 00:15:35.725 ] 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=85535 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
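The body of host/aer.sh, part of which is traced above and the rest of which follows below, reduces to: provision a subsystem with one namespace, start the aer example tool against it, wait for the tool to announce readiness through a touch file, hot-add a second namespace, and wait for the tool to report the resulting namespace-change notice. Condensed into plain RPC calls (a sketch; paths, NQN, serial number and sizes are taken from the trace):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AER_BIN=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
  NQN=nqn.2016-06.io.spdk:cnode1
  TOUCH=/tmp/aer_touch_file

  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 2    # any host allowed, at most 2 namespaces
  "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0                          # becomes nsid 1
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  rm -f "$TOUCH"
  "$AER_BIN" -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN" -n 2 -t "$TOUCH" &
  aerpid=$!

  # waitforfile: poll up to 200 x 0.1 s for the tool to create the touch file
  i=0
  while [[ ! -e $TOUCH && $i -lt 200 ]]; do sleep 0.1; i=$((i + 1)); done
  [[ -e $TOUCH ]] || { echo "aer tool never signalled readiness" >&2; exit 1; }

  # hot-adding a second namespace triggers the namespace-attribute AEN the tool is waiting for
  "$RPC" bdev_malloc_create 64 4096 --name Malloc1
  "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc1 -n 2
  wait "$aerpid"    # exits once aer_cb has seen the Changed Namespace log page (log page 4)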
common/autotest_common.sh@1265 -- # local i=0 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:15:35.725 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:15:35.726 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:35.985 Malloc1 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:35.985 [ 00:15:35.985 { 00:15:35.985 "allow_any_host": true, 00:15:35.985 "hosts": [], 00:15:35.985 "listen_addresses": [], 00:15:35.985 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.985 "subtype": "Discovery" 00:15:35.985 }, 00:15:35.985 { 00:15:35.985 "allow_any_host": true, 00:15:35.985 "hosts": [], 00:15:35.985 "listen_addresses": [ 00:15:35.985 { 00:15:35.985 "adrfam": "IPv4", 00:15:35.985 "traddr": "10.0.0.2", 00:15:35.985 "trsvcid": "4420", 00:15:35.985 "trtype": "TCP" 00:15:35.985 } 00:15:35.985 ], 00:15:35.985 "max_cntlid": 65519, 00:15:35.985 "max_namespaces": 2, 00:15:35.985 "min_cntlid": 1, 00:15:35.985 "model_number": "SPDK bdev Controller", 00:15:35.985 "namespaces": [ 00:15:35.985 { 00:15:35.985 "bdev_name": "Malloc0", 00:15:35.985 "name": "Malloc0", 00:15:35.985 "nguid": "E00D985A0B1540FEB20971B3C3A0CFEA", 00:15:35.985 "nsid": 1, 00:15:35.985 "uuid": 
"e00d985a-0b15-40fe-b209-71b3c3a0cfea" 00:15:35.985 }, 00:15:35.985 { 00:15:35.985 "bdev_name": "Malloc1", 00:15:35.985 "name": "Malloc1", 00:15:35.985 "nguid": "122744F61B3F4F6CA17B52ED8627B3AD", 00:15:35.985 "nsid": 2, 00:15:35.985 "uuid": "122744f6-1b3f-4f6c-a17b-52ed8627b3ad" 00:15:35.985 } 00:15:35.985 ], 00:15:35.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.985 "serial_number": "SPDK00000000000001", 00:15:35.985 "subtype": "NVMe" 00:15:35.985 } 00:15:35.985 ] 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 85535 00:15:35.985 Asynchronous Event Request test 00:15:35.985 Attaching to 10.0.0.2 00:15:35.985 Attached to 10.0.0.2 00:15:35.985 Registering asynchronous event callbacks... 00:15:35.985 Starting namespace attribute notice tests for all controllers... 00:15:35.985 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:35.985 aer_cb - Changed Namespace 00:15:35.985 Cleaning up... 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.985 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:36.270 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.270 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.270 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.270 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:36.270 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.270 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:15:36.270 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:15:36.270 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:36.270 18:02:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:36.270 rmmod nvme_tcp 00:15:36.270 rmmod nvme_fabrics 00:15:36.270 rmmod nvme_keyring 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 85482 ']' 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@490 -- # killprocess 85482 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 85482 ']' 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 85482 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85482 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85482' 00:15:36.270 killing process with pid 85482 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 85482 00:15:36.270 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 85482 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:36.547 00:15:36.547 real 0m2.498s 00:15:36.547 user 0m6.442s 00:15:36.547 sys 0m0.899s 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:36.547 ************************************ 00:15:36.547 END TEST nvmf_aer 00:15:36.547 ************************************ 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.547 ************************************ 00:15:36.547 START TEST nvmf_async_init 00:15:36.547 ************************************ 00:15:36.547 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:15:36.811 * Looking for test storage... 
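killprocess, traced just above at the end of the AER run, deliberately checks a few things before sending a signal: the PID must be non-empty and still alive, and the process's command name must not be sudo, so the privilege wrapper is never killed in place of the SPDK reactor. A stripped-down rendition of that guard (Linux-specific ps options, as in the trace; error handling simplified):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0             # already gone, nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")            # SPDK apps show up as reactor_0
      [[ $name != sudo ]] || return 1                    # refuse to kill the wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                    # reap it; ignore the SIGTERM exit status
  }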
00:15:36.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.811 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:15:36.812 18:02:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1fa23e587ea84fa1a2439c1bc9299784 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:36.812 Cannot find device "nvmf_tgt_br" 00:15:36.812 18:02:43 
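Earlier in async_init.sh, just above, two identifiers are derived that resurface later: the host NQN/host ID pair from nvme gen-hostnqn, and an NGUID made by stripping the dashes from a fresh UUID; that dashless value is what nvmf_subsystem_add_ns -g receives further down, and it comes back with dashes as the uuid of nvme0n1 in bdev_get_bdevs. A sketch of the derivation (assumes nvme-cli and uuidgen are installed; the parameter expansion is illustrative, not the script's exact code):

  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # reuse the UUID suffix as the host ID
  nguid=$(uuidgen | tr -d -)                  # 32 hex characters, no dashes
  echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID nguid=$nguid"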
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.812 Cannot find device "nvmf_tgt_br2" 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:36.812 Cannot find device "nvmf_tgt_br" 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:36.812 Cannot find device "nvmf_tgt_br2" 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:36.812 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:37.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:37.071 00:15:37.071 --- 10.0.0.2 ping statistics --- 00:15:37.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.071 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:37.071 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:37.071 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:37.071 00:15:37.071 --- 10.0.0.3 ping statistics --- 00:15:37.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.071 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:37.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:37.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:37.071 00:15:37.071 --- 10.0.0.1 ping statistics --- 00:15:37.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.071 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=85712 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 85712 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 85712 ']' 00:15:37.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.071 18:02:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.071 [2024-07-24 18:02:43.999674] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:15:37.071 [2024-07-24 18:02:43.999790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.330 [2024-07-24 18:02:44.134575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.330 [2024-07-24 18:02:44.238471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
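The NVMF_APP re-assignment shown just above is the mechanism that pins every later target invocation inside the namespace: NVMF_TARGET_NS_CMD is itself a bash array, and splicing it element-wise in front of the existing NVMF_APP array keeps each argument intact no matter what options get appended afterwards. A toy illustration (binary path and flags mirror the trace):

  NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)

  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")    # element-wise prepend, quoting preserved
  "${NVMF_APP[@]}" -m 0x1 &                                 # runs: ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1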
00:15:37.330 [2024-07-24 18:02:44.238521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.330 [2024-07-24 18:02:44.238532] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.330 [2024-07-24 18:02:44.238541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.330 [2024-07-24 18:02:44.238549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.330 [2024-07-24 18:02:44.238597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.588 [2024-07-24 18:02:44.411024] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.588 null0 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1fa23e587ea84fa1a2439c1bc9299784 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 
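Compared with the AER fixture, the async_init target exports a 1024 MB null bdev (writes are discarded) tagged with the pre-generated NGUID, and the initiator is SPDK's own NVMe bdev driver rather than the kernel. The rpc_cmd sequence traced above and below, condensed (a sketch; flags and NQNs are copied from the log, and $nguid is the dashless value generated earlier):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode0

  "$RPC" nvmf_create_transport -t tcp -o
  "$RPC" bdev_null_create null0 1024 512              # 1024 MB, 512-byte blocks
  "$RPC" bdev_wait_for_examine
  "$RPC" nvmf_create_subsystem "$NQN" -a              # allow any host (tightened later for the TLS leg)
  "$RPC" nvmf_subsystem_add_ns "$NQN" null0 -g "$nguid"
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  # loop back through SPDK's NVMe initiator; the namespace appears as bdev nvme0n1
  "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n "$NQN"
  "$RPC" bdev_get_bdevs -b nvme0n1                    # JSON as dumped in the log; uuid matches the NGUID
  "$RPC" bdev_nvme_reset_controller nvme0             # disconnect/reconnect; cntlid goes from 1 to 2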
-- # xtrace_disable 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.588 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.588 [2024-07-24 18:02:44.455208] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.589 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.589 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:15:37.589 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.589 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.848 nvme0n1 00:15:37.848 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.848 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:15:37.848 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.848 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.848 [ 00:15:37.848 { 00:15:37.848 "aliases": [ 00:15:37.848 "1fa23e58-7ea8-4fa1-a243-9c1bc9299784" 00:15:37.848 ], 00:15:37.848 "assigned_rate_limits": { 00:15:37.848 "r_mbytes_per_sec": 0, 00:15:37.848 "rw_ios_per_sec": 0, 00:15:37.848 "rw_mbytes_per_sec": 0, 00:15:37.848 "w_mbytes_per_sec": 0 00:15:37.848 }, 00:15:37.848 "block_size": 512, 00:15:37.848 "claimed": false, 00:15:37.848 "driver_specific": { 00:15:37.848 "mp_policy": "active_passive", 00:15:37.848 "nvme": [ 00:15:37.848 { 00:15:37.848 "ctrlr_data": { 00:15:37.848 "ana_reporting": false, 00:15:37.848 "cntlid": 1, 00:15:37.848 "firmware_revision": "24.09", 00:15:37.848 "model_number": "SPDK bdev Controller", 00:15:37.848 "multi_ctrlr": true, 00:15:37.848 "oacs": { 00:15:37.848 "firmware": 0, 00:15:37.848 "format": 0, 00:15:37.848 "ns_manage": 0, 00:15:37.848 "security": 0 00:15:37.848 }, 00:15:37.848 "serial_number": "00000000000000000000", 00:15:37.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:37.848 "vendor_id": "0x8086" 00:15:37.848 }, 00:15:37.848 "ns_data": { 00:15:37.848 "can_share": true, 00:15:37.848 "id": 1 00:15:37.848 }, 00:15:37.848 "trid": { 00:15:37.848 "adrfam": "IPv4", 00:15:37.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:37.848 "traddr": "10.0.0.2", 00:15:37.848 "trsvcid": "4420", 00:15:37.848 "trtype": "TCP" 00:15:37.848 }, 00:15:37.848 "vs": { 00:15:37.848 "nvme_version": "1.3" 00:15:37.848 } 00:15:37.848 } 00:15:37.848 ] 00:15:37.848 }, 00:15:37.848 "memory_domains": [ 00:15:37.848 { 00:15:37.848 "dma_device_id": "system", 00:15:37.848 "dma_device_type": 1 00:15:37.848 } 00:15:37.848 ], 00:15:37.848 "name": "nvme0n1", 00:15:37.848 "num_blocks": 2097152, 00:15:37.848 "product_name": "NVMe disk", 00:15:37.848 "supported_io_types": { 00:15:37.848 "abort": true, 00:15:37.848 "compare": true, 
00:15:37.848 "compare_and_write": true, 00:15:37.848 "copy": true, 00:15:37.848 "flush": true, 00:15:37.848 "get_zone_info": false, 00:15:37.848 "nvme_admin": true, 00:15:37.848 "nvme_io": true, 00:15:37.848 "nvme_io_md": false, 00:15:37.848 "nvme_iov_md": false, 00:15:37.848 "read": true, 00:15:37.848 "reset": true, 00:15:37.848 "seek_data": false, 00:15:37.848 "seek_hole": false, 00:15:37.848 "unmap": false, 00:15:37.848 "write": true, 00:15:37.848 "write_zeroes": true, 00:15:37.848 "zcopy": false, 00:15:37.848 "zone_append": false, 00:15:37.848 "zone_management": false 00:15:37.848 }, 00:15:37.848 "uuid": "1fa23e58-7ea8-4fa1-a243-9c1bc9299784", 00:15:37.848 "zoned": false 00:15:37.848 } 00:15:37.848 ] 00:15:37.848 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.848 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:15:37.848 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.848 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:37.848 [2024-07-24 18:02:44.743091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:37.848 [2024-07-24 18:02:44.743442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187cb00 (9): Bad file descriptor 00:15:38.106 [2024-07-24 18:02:44.885423] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:38.106 [ 00:15:38.106 { 00:15:38.106 "aliases": [ 00:15:38.106 "1fa23e58-7ea8-4fa1-a243-9c1bc9299784" 00:15:38.106 ], 00:15:38.106 "assigned_rate_limits": { 00:15:38.106 "r_mbytes_per_sec": 0, 00:15:38.106 "rw_ios_per_sec": 0, 00:15:38.106 "rw_mbytes_per_sec": 0, 00:15:38.106 "w_mbytes_per_sec": 0 00:15:38.106 }, 00:15:38.106 "block_size": 512, 00:15:38.106 "claimed": false, 00:15:38.106 "driver_specific": { 00:15:38.106 "mp_policy": "active_passive", 00:15:38.106 "nvme": [ 00:15:38.106 { 00:15:38.106 "ctrlr_data": { 00:15:38.106 "ana_reporting": false, 00:15:38.106 "cntlid": 2, 00:15:38.106 "firmware_revision": "24.09", 00:15:38.106 "model_number": "SPDK bdev Controller", 00:15:38.106 "multi_ctrlr": true, 00:15:38.106 "oacs": { 00:15:38.106 "firmware": 0, 00:15:38.106 "format": 0, 00:15:38.106 "ns_manage": 0, 00:15:38.106 "security": 0 00:15:38.106 }, 00:15:38.106 "serial_number": "00000000000000000000", 00:15:38.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:38.106 "vendor_id": "0x8086" 00:15:38.106 }, 00:15:38.106 "ns_data": { 00:15:38.106 "can_share": true, 00:15:38.106 "id": 1 00:15:38.106 }, 00:15:38.106 "trid": { 00:15:38.106 "adrfam": "IPv4", 00:15:38.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:38.106 "traddr": "10.0.0.2", 00:15:38.106 "trsvcid": "4420", 00:15:38.106 "trtype": "TCP" 00:15:38.106 }, 00:15:38.106 "vs": { 00:15:38.106 "nvme_version": "1.3" 00:15:38.106 } 00:15:38.106 } 00:15:38.106 ] 00:15:38.106 }, 00:15:38.106 "memory_domains": [ 00:15:38.106 { 
00:15:38.106 "dma_device_id": "system", 00:15:38.106 "dma_device_type": 1 00:15:38.106 } 00:15:38.106 ], 00:15:38.106 "name": "nvme0n1", 00:15:38.106 "num_blocks": 2097152, 00:15:38.106 "product_name": "NVMe disk", 00:15:38.106 "supported_io_types": { 00:15:38.106 "abort": true, 00:15:38.106 "compare": true, 00:15:38.106 "compare_and_write": true, 00:15:38.106 "copy": true, 00:15:38.106 "flush": true, 00:15:38.106 "get_zone_info": false, 00:15:38.106 "nvme_admin": true, 00:15:38.106 "nvme_io": true, 00:15:38.106 "nvme_io_md": false, 00:15:38.106 "nvme_iov_md": false, 00:15:38.106 "read": true, 00:15:38.106 "reset": true, 00:15:38.106 "seek_data": false, 00:15:38.106 "seek_hole": false, 00:15:38.106 "unmap": false, 00:15:38.106 "write": true, 00:15:38.106 "write_zeroes": true, 00:15:38.106 "zcopy": false, 00:15:38.106 "zone_append": false, 00:15:38.106 "zone_management": false 00:15:38.106 }, 00:15:38.106 "uuid": "1fa23e58-7ea8-4fa1-a243-9c1bc9299784", 00:15:38.106 "zoned": false 00:15:38.106 } 00:15:38.106 ] 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.RChlafkxkh 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.RChlafkxkh 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:38.106 [2024-07-24 18:02:44.975267] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:38.106 [2024-07-24 18:02:44.975472] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RChlafkxkh 00:15:38.106 18:02:44 
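The final leg re-attaches over TLS: any-host access is disabled, a second listener is opened on port 4421 with --secure-channel, and then (in the trace that follows) the host NQN is added as an allowed host with a pre-shared key and the initiator reconnects presenting the same key file. Condensed (the interchange-format key string is copied verbatim from the log; the trace itself warns that passing the PSK as a file path is a deprecated form):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode0
  HOSTNQN=nqn.2016-06.io.spdk:host1

  key_path=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"

  "$RPC" nvmf_subsystem_allow_any_host "$NQN" --disable                       # only listed hosts from here on
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  "$RPC" nvmf_subsystem_add_host "$NQN" "$HOSTNQN" --psk "$key_path"

  # initiator side presents the matching hostnqn and PSK
  "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
         -n "$NQN" -q "$HOSTNQN" --psk "$key_path"

  rm -f "$key_path"    # the script only removes the key after detaching the controller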
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:38.106 [2024-07-24 18:02:44.983285] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RChlafkxkh 00:15:38.106 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.107 18:02:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:38.107 [2024-07-24 18:02:44.991313] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:38.107 [2024-07-24 18:02:44.991409] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:38.107 nvme0n1 00:15:38.107 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.107 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:15:38.107 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.107 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:38.107 [ 00:15:38.107 { 00:15:38.107 "aliases": [ 00:15:38.107 "1fa23e58-7ea8-4fa1-a243-9c1bc9299784" 00:15:38.107 ], 00:15:38.107 "assigned_rate_limits": { 00:15:38.107 "r_mbytes_per_sec": 0, 00:15:38.107 "rw_ios_per_sec": 0, 00:15:38.107 "rw_mbytes_per_sec": 0, 00:15:38.107 "w_mbytes_per_sec": 0 00:15:38.107 }, 00:15:38.107 "block_size": 512, 00:15:38.107 "claimed": false, 00:15:38.107 "driver_specific": { 00:15:38.107 "mp_policy": "active_passive", 00:15:38.107 "nvme": [ 00:15:38.107 { 00:15:38.107 "ctrlr_data": { 00:15:38.107 "ana_reporting": false, 00:15:38.107 "cntlid": 3, 00:15:38.107 "firmware_revision": "24.09", 00:15:38.107 "model_number": "SPDK bdev Controller", 00:15:38.107 "multi_ctrlr": true, 00:15:38.107 "oacs": { 00:15:38.107 "firmware": 0, 00:15:38.107 "format": 0, 00:15:38.107 "ns_manage": 0, 00:15:38.107 "security": 0 00:15:38.107 }, 00:15:38.107 "serial_number": "00000000000000000000", 00:15:38.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:38.107 "vendor_id": "0x8086" 00:15:38.107 }, 00:15:38.107 "ns_data": { 00:15:38.107 "can_share": true, 00:15:38.107 "id": 1 00:15:38.107 }, 00:15:38.107 "trid": { 00:15:38.107 "adrfam": "IPv4", 00:15:38.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:38.400 "traddr": "10.0.0.2", 00:15:38.400 "trsvcid": "4421", 00:15:38.400 "trtype": "TCP" 00:15:38.400 }, 00:15:38.400 "vs": { 00:15:38.400 "nvme_version": "1.3" 00:15:38.400 } 00:15:38.400 } 00:15:38.400 ] 00:15:38.400 }, 00:15:38.400 "memory_domains": [ 00:15:38.400 { 00:15:38.400 "dma_device_id": "system", 00:15:38.400 "dma_device_type": 1 00:15:38.400 } 00:15:38.400 ], 00:15:38.400 "name": "nvme0n1", 00:15:38.400 "num_blocks": 2097152, 00:15:38.400 "product_name": "NVMe disk", 00:15:38.400 "supported_io_types": { 00:15:38.400 "abort": true, 00:15:38.400 "compare": true, 00:15:38.400 
"compare_and_write": true, 00:15:38.400 "copy": true, 00:15:38.400 "flush": true, 00:15:38.400 "get_zone_info": false, 00:15:38.400 "nvme_admin": true, 00:15:38.400 "nvme_io": true, 00:15:38.400 "nvme_io_md": false, 00:15:38.400 "nvme_iov_md": false, 00:15:38.400 "read": true, 00:15:38.400 "reset": true, 00:15:38.400 "seek_data": false, 00:15:38.400 "seek_hole": false, 00:15:38.400 "unmap": false, 00:15:38.400 "write": true, 00:15:38.400 "write_zeroes": true, 00:15:38.400 "zcopy": false, 00:15:38.400 "zone_append": false, 00:15:38.400 "zone_management": false 00:15:38.400 }, 00:15:38.400 "uuid": "1fa23e58-7ea8-4fa1-a243-9c1bc9299784", 00:15:38.400 "zoned": false 00:15:38.400 } 00:15:38.400 ] 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.RChlafkxkh 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.400 rmmod nvme_tcp 00:15:38.400 rmmod nvme_fabrics 00:15:38.400 rmmod nvme_keyring 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 85712 ']' 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 85712 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 85712 ']' 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 85712 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85712 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85712' 00:15:38.400 killing process with pid 85712 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 85712 00:15:38.400 [2024-07-24 18:02:45.244370] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:38.400 [2024-07-24 18:02:45.244416] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:38.400 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 85712 00:15:38.660 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:38.660 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:38.660 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:38.660 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.660 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.660 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.660 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:38.661 ************************************ 00:15:38.661 END TEST nvmf_async_init 00:15:38.661 ************************************ 00:15:38.661 00:15:38.661 real 0m2.038s 00:15:38.661 user 0m1.661s 00:15:38.661 sys 0m0.675s 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.661 ************************************ 00:15:38.661 START TEST dma 00:15:38.661 ************************************ 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:15:38.661 * Looking for test storage... 
00:15:38.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.661 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.920 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.920 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:38.920 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:38.920 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.920 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.920 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:15:38.921 00:15:38.921 real 0m0.111s 00:15:38.921 user 0m0.047s 00:15:38.921 sys 0m0.071s 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:15:38.921 ************************************ 00:15:38.921 END TEST dma 00:15:38.921 ************************************ 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.921 ************************************ 00:15:38.921 START TEST nvmf_identify 00:15:38.921 ************************************ 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:38.921 * Looking for test storage... 00:15:38.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.921 18:02:45 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:38.921 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:38.922 Cannot find device "nvmf_tgt_br" 00:15:38.922 18:02:45 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:38.922 Cannot find device "nvmf_tgt_br2" 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:38.922 Cannot find device "nvmf_tgt_br" 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # true 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:38.922 Cannot find device "nvmf_tgt_br2" 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:38.922 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:39.181 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:39.181 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.181 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:39.181 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.181 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:39.181 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.181 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.181 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.181 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.181 18:02:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if up 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:39.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:15:39.181 00:15:39.181 --- 10.0.0.2 ping statistics --- 00:15:39.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.181 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:39.181 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.181 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:15:39.181 00:15:39.181 --- 10.0.0.3 ping statistics --- 00:15:39.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.181 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:39.181 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:39.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:15:39.440 00:15:39.440 --- 10.0.0.1 ping statistics --- 00:15:39.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.440 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=85965 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 85965 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 85965 ']' 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:39.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:39.440 18:02:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.440 [2024-07-24 18:02:46.235643] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:15:39.440 [2024-07-24 18:02:46.235744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.441 [2024-07-24 18:02:46.373787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.699 [2024-07-24 18:02:46.482827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.699 [2024-07-24 18:02:46.482884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.700 [2024-07-24 18:02:46.482895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.700 [2024-07-24 18:02:46.482905] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.700 [2024-07-24 18:02:46.482913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.700 [2024-07-24 18:02:46.483104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.700 [2024-07-24 18:02:46.483746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.700 [2024-07-24 18:02:46.483804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.700 [2024-07-24 18:02:46.483814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.636 [2024-07-24 18:02:47.285770] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.636 Malloc0 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.636 18:02:47 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.636 [2024-07-24 18:02:47.398072] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.636 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.637 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:40.637 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.637 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.637 [ 00:15:40.637 { 00:15:40.637 "allow_any_host": true, 00:15:40.637 "hosts": [], 00:15:40.637 "listen_addresses": [ 00:15:40.637 { 00:15:40.637 "adrfam": "IPv4", 00:15:40.637 "traddr": "10.0.0.2", 00:15:40.637 "trsvcid": "4420", 00:15:40.637 "trtype": "TCP" 00:15:40.637 } 00:15:40.637 ], 00:15:40.637 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:40.637 "subtype": "Discovery" 00:15:40.637 }, 00:15:40.637 { 00:15:40.637 "allow_any_host": true, 00:15:40.637 "hosts": [], 00:15:40.637 "listen_addresses": [ 00:15:40.637 { 00:15:40.637 "adrfam": "IPv4", 00:15:40.637 "traddr": "10.0.0.2", 00:15:40.637 "trsvcid": "4420", 00:15:40.637 "trtype": "TCP" 00:15:40.637 } 00:15:40.637 ], 00:15:40.637 "max_cntlid": 65519, 00:15:40.637 "max_namespaces": 32, 00:15:40.637 "min_cntlid": 1, 00:15:40.637 "model_number": "SPDK bdev Controller", 00:15:40.637 "namespaces": [ 00:15:40.637 { 00:15:40.637 "bdev_name": "Malloc0", 00:15:40.637 "eui64": "ABCDEF0123456789", 00:15:40.637 "name": "Malloc0", 00:15:40.637 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:40.637 "nsid": 1, 00:15:40.637 "uuid": "e274b2a6-2ad9-4a18-b342-6541fe143a6e" 00:15:40.637 } 00:15:40.637 ], 00:15:40.637 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.637 "serial_number": "SPDK00000000000001", 00:15:40.637 "subtype": "NVMe" 00:15:40.637 } 00:15:40.637 ] 00:15:40.637 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.637 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:40.637 [2024-07-24 18:02:47.454534] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:15:40.637 [2024-07-24 18:02:47.454594] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86018 ] 00:15:40.637 [2024-07-24 18:02:47.599604] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:40.637 [2024-07-24 18:02:47.599727] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:40.637 [2024-07-24 18:02:47.599738] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:40.637 [2024-07-24 18:02:47.599758] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:40.637 [2024-07-24 18:02:47.599774] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:40.637 [2024-07-24 18:02:47.599969] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:40.637 [2024-07-24 18:02:47.600028] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe67a60 0 00:15:40.637 [2024-07-24 18:02:47.607293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:40.637 [2024-07-24 18:02:47.607373] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:40.637 [2024-07-24 18:02:47.607382] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:40.637 [2024-07-24 18:02:47.607390] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:40.637 [2024-07-24 18:02:47.607479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.637 [2024-07-24 18:02:47.607490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.637 [2024-07-24 18:02:47.607498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67a60) 00:15:40.637 [2024-07-24 18:02:47.607521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:40.637 [2024-07-24 18:02:47.607584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaa840, cid 0, qid 0 00:15:40.900 [2024-07-24 18:02:47.615289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.900 [2024-07-24 18:02:47.615354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.900 [2024-07-24 18:02:47.615366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.615376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaa840) on tqpair=0xe67a60 00:15:40.900 [2024-07-24 18:02:47.615407] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:40.900 [2024-07-24 18:02:47.615423] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:40.900 [2024-07-24 18:02:47.615434] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:40.900 [2024-07-24 18:02:47.615469] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.615478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.615485] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67a60) 00:15:40.900 [2024-07-24 18:02:47.615504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.900 [2024-07-24 18:02:47.615560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaa840, cid 0, qid 0 00:15:40.900 [2024-07-24 18:02:47.615653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.900 [2024-07-24 18:02:47.615663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.900 [2024-07-24 18:02:47.615669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.615677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaa840) on tqpair=0xe67a60 00:15:40.900 [2024-07-24 18:02:47.615686] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:40.900 [2024-07-24 18:02:47.615697] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:40.900 [2024-07-24 18:02:47.615709] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.615715] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.615722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67a60) 00:15:40.900 [2024-07-24 18:02:47.615732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.900 [2024-07-24 18:02:47.615758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaa840, cid 0, qid 0 00:15:40.900 [2024-07-24 18:02:47.615812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.900 [2024-07-24 18:02:47.615822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.900 [2024-07-24 18:02:47.615828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.615835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaa840) on tqpair=0xe67a60 00:15:40.900 [2024-07-24 18:02:47.615844] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:40.900 [2024-07-24 18:02:47.615857] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:40.900 [2024-07-24 18:02:47.615867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.615873] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.615880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67a60) 00:15:40.900 [2024-07-24 18:02:47.615890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.900 [2024-07-24 18:02:47.615913] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaa840, cid 0, qid 0 
00:15:40.900 [2024-07-24 18:02:47.615963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.900 [2024-07-24 18:02:47.615972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.900 [2024-07-24 18:02:47.615978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.615985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaa840) on tqpair=0xe67a60 00:15:40.900 [2024-07-24 18:02:47.615993] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:40.900 [2024-07-24 18:02:47.616006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.616013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.616019] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67a60) 00:15:40.900 [2024-07-24 18:02:47.616029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.900 [2024-07-24 18:02:47.616050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaa840, cid 0, qid 0 00:15:40.900 [2024-07-24 18:02:47.616096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.900 [2024-07-24 18:02:47.616105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.900 [2024-07-24 18:02:47.616111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.616118] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaa840) on tqpair=0xe67a60 00:15:40.900 [2024-07-24 18:02:47.616125] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:40.900 [2024-07-24 18:02:47.616133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:40.900 [2024-07-24 18:02:47.616145] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:40.900 [2024-07-24 18:02:47.616262] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:40.900 [2024-07-24 18:02:47.616285] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:40.900 [2024-07-24 18:02:47.616301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.616307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.616314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67a60) 00:15:40.900 [2024-07-24 18:02:47.616326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.900 [2024-07-24 18:02:47.616356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaa840, cid 0, qid 0 00:15:40.900 [2024-07-24 18:02:47.616414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.900 [2024-07-24 18:02:47.616431] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:15:40.900 [2024-07-24 18:02:47.616438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.616444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaa840) on tqpair=0xe67a60 00:15:40.900 [2024-07-24 18:02:47.616452] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:40.900 [2024-07-24 18:02:47.616466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.616473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.616480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67a60) 00:15:40.900 [2024-07-24 18:02:47.616490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.900 [2024-07-24 18:02:47.616516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaa840, cid 0, qid 0 00:15:40.900 [2024-07-24 18:02:47.616566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.900 [2024-07-24 18:02:47.616575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.900 [2024-07-24 18:02:47.616581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.900 [2024-07-24 18:02:47.616587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaa840) on tqpair=0xe67a60 00:15:40.900 [2024-07-24 18:02:47.616595] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:40.900 [2024-07-24 18:02:47.616604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:40.900 [2024-07-24 18:02:47.616616] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:40.901 [2024-07-24 18:02:47.616635] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:40.901 [2024-07-24 18:02:47.616653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.616659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67a60) 00:15:40.901 [2024-07-24 18:02:47.616669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.901 [2024-07-24 18:02:47.616693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaa840, cid 0, qid 0 00:15:40.901 [2024-07-24 18:02:47.616785] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.901 [2024-07-24 18:02:47.616804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.901 [2024-07-24 18:02:47.616812] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.616818] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67a60): datao=0, datal=4096, cccid=0 00:15:40.901 [2024-07-24 18:02:47.616826] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeaa840) on tqpair(0xe67a60): expected_datao=0, 
payload_size=4096 00:15:40.901 [2024-07-24 18:02:47.616835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.616847] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.616855] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.616868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.901 [2024-07-24 18:02:47.616877] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.901 [2024-07-24 18:02:47.616883] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.616890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaa840) on tqpair=0xe67a60 00:15:40.901 [2024-07-24 18:02:47.616904] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:40.901 [2024-07-24 18:02:47.616913] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:40.901 [2024-07-24 18:02:47.616921] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:40.901 [2024-07-24 18:02:47.616937] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:40.901 [2024-07-24 18:02:47.616945] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:40.901 [2024-07-24 18:02:47.616954] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:40.901 [2024-07-24 18:02:47.616968] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:40.901 [2024-07-24 18:02:47.616980] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.616987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.616994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67a60) 00:15:40.901 [2024-07-24 18:02:47.617006] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:40.901 [2024-07-24 18:02:47.617036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaa840, cid 0, qid 0 00:15:40.901 [2024-07-24 18:02:47.617098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.901 [2024-07-24 18:02:47.617113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.901 [2024-07-24 18:02:47.617119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaa840) on tqpair=0xe67a60 00:15:40.901 [2024-07-24 18:02:47.617137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe67a60) 00:15:40.901 [2024-07-24 18:02:47.617160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.901 [2024-07-24 18:02:47.617170] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe67a60) 00:15:40.901 [2024-07-24 18:02:47.617192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.901 [2024-07-24 18:02:47.617202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe67a60) 00:15:40.901 [2024-07-24 18:02:47.617224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.901 [2024-07-24 18:02:47.617234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.901 [2024-07-24 18:02:47.617274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.901 [2024-07-24 18:02:47.617283] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:40.901 [2024-07-24 18:02:47.617296] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:40.901 [2024-07-24 18:02:47.617309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617316] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67a60) 00:15:40.901 [2024-07-24 18:02:47.617326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.901 [2024-07-24 18:02:47.617365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaa840, cid 0, qid 0 00:15:40.901 [2024-07-24 18:02:47.617374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaa9c0, cid 1, qid 0 00:15:40.901 [2024-07-24 18:02:47.617382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaab40, cid 2, qid 0 00:15:40.901 [2024-07-24 18:02:47.617389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.901 [2024-07-24 18:02:47.617397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaae40, cid 4, qid 0 00:15:40.901 [2024-07-24 18:02:47.617478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.901 [2024-07-24 18:02:47.617493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.901 [2024-07-24 18:02:47.617500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xeaae40) on tqpair=0xe67a60 00:15:40.901 [2024-07-24 18:02:47.617516] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:40.901 [2024-07-24 18:02:47.617525] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:40.901 [2024-07-24 18:02:47.617542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67a60) 00:15:40.901 [2024-07-24 18:02:47.617559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.901 [2024-07-24 18:02:47.617586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaae40, cid 4, qid 0 00:15:40.901 [2024-07-24 18:02:47.617644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.901 [2024-07-24 18:02:47.617653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.901 [2024-07-24 18:02:47.617660] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617666] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67a60): datao=0, datal=4096, cccid=4 00:15:40.901 [2024-07-24 18:02:47.617674] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeaae40) on tqpair(0xe67a60): expected_datao=0, payload_size=4096 00:15:40.901 [2024-07-24 18:02:47.617683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617694] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617700] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.901 [2024-07-24 18:02:47.617721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.901 [2024-07-24 18:02:47.617728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaae40) on tqpair=0xe67a60 00:15:40.901 [2024-07-24 18:02:47.617755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:40.901 [2024-07-24 18:02:47.617803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67a60) 00:15:40.901 [2024-07-24 18:02:47.617822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.901 [2024-07-24 18:02:47.617833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.617847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe67a60) 00:15:40.901 [2024-07-24 18:02:47.617856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.901 [2024-07-24 
18:02:47.617891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaae40, cid 4, qid 0 00:15:40.901 [2024-07-24 18:02:47.617899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaafc0, cid 5, qid 0 00:15:40.901 [2024-07-24 18:02:47.617997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.901 [2024-07-24 18:02:47.618012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.901 [2024-07-24 18:02:47.618018] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.618024] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67a60): datao=0, datal=1024, cccid=4 00:15:40.901 [2024-07-24 18:02:47.618032] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeaae40) on tqpair(0xe67a60): expected_datao=0, payload_size=1024 00:15:40.901 [2024-07-24 18:02:47.618040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.901 [2024-07-24 18:02:47.618049] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.618055] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.618063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.902 [2024-07-24 18:02:47.618071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.902 [2024-07-24 18:02:47.618078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.618084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaafc0) on tqpair=0xe67a60 00:15:40.902 [2024-07-24 18:02:47.658384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.902 [2024-07-24 18:02:47.658444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.902 [2024-07-24 18:02:47.658453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.658462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaae40) on tqpair=0xe67a60 00:15:40.902 [2024-07-24 18:02:47.658500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.658507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67a60) 00:15:40.902 [2024-07-24 18:02:47.658529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.902 [2024-07-24 18:02:47.658583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaae40, cid 4, qid 0 00:15:40.902 [2024-07-24 18:02:47.658704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.902 [2024-07-24 18:02:47.658714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.902 [2024-07-24 18:02:47.658720] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.658726] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67a60): datao=0, datal=3072, cccid=4 00:15:40.902 [2024-07-24 18:02:47.658735] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeaae40) on tqpair(0xe67a60): expected_datao=0, payload_size=3072 00:15:40.902 [2024-07-24 18:02:47.658743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.658756] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:15:40.902 [2024-07-24 18:02:47.658763] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.658775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.902 [2024-07-24 18:02:47.658783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.902 [2024-07-24 18:02:47.658790] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.658796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaae40) on tqpair=0xe67a60 00:15:40.902 [2024-07-24 18:02:47.658812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.658818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe67a60) 00:15:40.902 [2024-07-24 18:02:47.658829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.902 [2024-07-24 18:02:47.658864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaae40, cid 4, qid 0 00:15:40.902 [2024-07-24 18:02:47.658921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.902 [2024-07-24 18:02:47.658932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.902 [2024-07-24 18:02:47.658938] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.658945] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe67a60): datao=0, datal=8, cccid=4 00:15:40.902 [2024-07-24 18:02:47.658953] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeaae40) on tqpair(0xe67a60): expected_datao=0, payload_size=8 00:15:40.902 [2024-07-24 18:02:47.658961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.658971] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.658977] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.702364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.902 [2024-07-24 18:02:47.702421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.902 [2024-07-24 18:02:47.702430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.902 [2024-07-24 18:02:47.702438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaae40) on tqpair=0xe67a60 00:15:40.902 ===================================================== 00:15:40.902 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:40.902 ===================================================== 00:15:40.902 Controller Capabilities/Features 00:15:40.902 ================================ 00:15:40.902 Vendor ID: 0000 00:15:40.902 Subsystem Vendor ID: 0000 00:15:40.902 Serial Number: .................... 00:15:40.902 Model Number: ........................................ 
00:15:40.902 Firmware Version: 24.09 00:15:40.902 Recommended Arb Burst: 0 00:15:40.902 IEEE OUI Identifier: 00 00 00 00:15:40.902 Multi-path I/O 00:15:40.902 May have multiple subsystem ports: No 00:15:40.902 May have multiple controllers: No 00:15:40.902 Associated with SR-IOV VF: No 00:15:40.902 Max Data Transfer Size: 131072 00:15:40.902 Max Number of Namespaces: 0 00:15:40.902 Max Number of I/O Queues: 1024 00:15:40.902 NVMe Specification Version (VS): 1.3 00:15:40.902 NVMe Specification Version (Identify): 1.3 00:15:40.902 Maximum Queue Entries: 128 00:15:40.902 Contiguous Queues Required: Yes 00:15:40.902 Arbitration Mechanisms Supported 00:15:40.902 Weighted Round Robin: Not Supported 00:15:40.902 Vendor Specific: Not Supported 00:15:40.902 Reset Timeout: 15000 ms 00:15:40.902 Doorbell Stride: 4 bytes 00:15:40.902 NVM Subsystem Reset: Not Supported 00:15:40.902 Command Sets Supported 00:15:40.902 NVM Command Set: Supported 00:15:40.902 Boot Partition: Not Supported 00:15:40.902 Memory Page Size Minimum: 4096 bytes 00:15:40.902 Memory Page Size Maximum: 4096 bytes 00:15:40.902 Persistent Memory Region: Not Supported 00:15:40.902 Optional Asynchronous Events Supported 00:15:40.902 Namespace Attribute Notices: Not Supported 00:15:40.902 Firmware Activation Notices: Not Supported 00:15:40.902 ANA Change Notices: Not Supported 00:15:40.902 PLE Aggregate Log Change Notices: Not Supported 00:15:40.902 LBA Status Info Alert Notices: Not Supported 00:15:40.902 EGE Aggregate Log Change Notices: Not Supported 00:15:40.902 Normal NVM Subsystem Shutdown event: Not Supported 00:15:40.902 Zone Descriptor Change Notices: Not Supported 00:15:40.902 Discovery Log Change Notices: Supported 00:15:40.902 Controller Attributes 00:15:40.902 128-bit Host Identifier: Not Supported 00:15:40.902 Non-Operational Permissive Mode: Not Supported 00:15:40.902 NVM Sets: Not Supported 00:15:40.902 Read Recovery Levels: Not Supported 00:15:40.902 Endurance Groups: Not Supported 00:15:40.902 Predictable Latency Mode: Not Supported 00:15:40.902 Traffic Based Keep ALive: Not Supported 00:15:40.902 Namespace Granularity: Not Supported 00:15:40.902 SQ Associations: Not Supported 00:15:40.902 UUID List: Not Supported 00:15:40.902 Multi-Domain Subsystem: Not Supported 00:15:40.902 Fixed Capacity Management: Not Supported 00:15:40.902 Variable Capacity Management: Not Supported 00:15:40.902 Delete Endurance Group: Not Supported 00:15:40.902 Delete NVM Set: Not Supported 00:15:40.902 Extended LBA Formats Supported: Not Supported 00:15:40.902 Flexible Data Placement Supported: Not Supported 00:15:40.902 00:15:40.902 Controller Memory Buffer Support 00:15:40.902 ================================ 00:15:40.902 Supported: No 00:15:40.902 00:15:40.902 Persistent Memory Region Support 00:15:40.902 ================================ 00:15:40.902 Supported: No 00:15:40.902 00:15:40.902 Admin Command Set Attributes 00:15:40.902 ============================ 00:15:40.902 Security Send/Receive: Not Supported 00:15:40.902 Format NVM: Not Supported 00:15:40.902 Firmware Activate/Download: Not Supported 00:15:40.902 Namespace Management: Not Supported 00:15:40.902 Device Self-Test: Not Supported 00:15:40.902 Directives: Not Supported 00:15:40.902 NVMe-MI: Not Supported 00:15:40.902 Virtualization Management: Not Supported 00:15:40.902 Doorbell Buffer Config: Not Supported 00:15:40.902 Get LBA Status Capability: Not Supported 00:15:40.902 Command & Feature Lockdown Capability: Not Supported 00:15:40.902 Abort Command Limit: 1 00:15:40.902 Async 
Event Request Limit: 4 00:15:40.902 Number of Firmware Slots: N/A 00:15:40.902 Firmware Slot 1 Read-Only: N/A 00:15:40.902 Firmware Activation Without Reset: N/A 00:15:40.902 Multiple Update Detection Support: N/A 00:15:40.902 Firmware Update Granularity: No Information Provided 00:15:40.902 Per-Namespace SMART Log: No 00:15:40.902 Asymmetric Namespace Access Log Page: Not Supported 00:15:40.902 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:40.902 Command Effects Log Page: Not Supported 00:15:40.902 Get Log Page Extended Data: Supported 00:15:40.902 Telemetry Log Pages: Not Supported 00:15:40.902 Persistent Event Log Pages: Not Supported 00:15:40.902 Supported Log Pages Log Page: May Support 00:15:40.902 Commands Supported & Effects Log Page: Not Supported 00:15:40.902 Feature Identifiers & Effects Log Page:May Support 00:15:40.902 NVMe-MI Commands & Effects Log Page: May Support 00:15:40.902 Data Area 4 for Telemetry Log: Not Supported 00:15:40.902 Error Log Page Entries Supported: 128 00:15:40.902 Keep Alive: Not Supported 00:15:40.902 00:15:40.902 NVM Command Set Attributes 00:15:40.902 ========================== 00:15:40.902 Submission Queue Entry Size 00:15:40.902 Max: 1 00:15:40.902 Min: 1 00:15:40.902 Completion Queue Entry Size 00:15:40.902 Max: 1 00:15:40.903 Min: 1 00:15:40.903 Number of Namespaces: 0 00:15:40.903 Compare Command: Not Supported 00:15:40.903 Write Uncorrectable Command: Not Supported 00:15:40.903 Dataset Management Command: Not Supported 00:15:40.903 Write Zeroes Command: Not Supported 00:15:40.903 Set Features Save Field: Not Supported 00:15:40.903 Reservations: Not Supported 00:15:40.903 Timestamp: Not Supported 00:15:40.903 Copy: Not Supported 00:15:40.903 Volatile Write Cache: Not Present 00:15:40.903 Atomic Write Unit (Normal): 1 00:15:40.903 Atomic Write Unit (PFail): 1 00:15:40.903 Atomic Compare & Write Unit: 1 00:15:40.903 Fused Compare & Write: Supported 00:15:40.903 Scatter-Gather List 00:15:40.903 SGL Command Set: Supported 00:15:40.903 SGL Keyed: Supported 00:15:40.903 SGL Bit Bucket Descriptor: Not Supported 00:15:40.903 SGL Metadata Pointer: Not Supported 00:15:40.903 Oversized SGL: Not Supported 00:15:40.903 SGL Metadata Address: Not Supported 00:15:40.903 SGL Offset: Supported 00:15:40.903 Transport SGL Data Block: Not Supported 00:15:40.903 Replay Protected Memory Block: Not Supported 00:15:40.903 00:15:40.903 Firmware Slot Information 00:15:40.903 ========================= 00:15:40.903 Active slot: 0 00:15:40.903 00:15:40.903 00:15:40.903 Error Log 00:15:40.903 ========= 00:15:40.903 00:15:40.903 Active Namespaces 00:15:40.903 ================= 00:15:40.903 Discovery Log Page 00:15:40.903 ================== 00:15:40.903 Generation Counter: 2 00:15:40.903 Number of Records: 2 00:15:40.903 Record Format: 0 00:15:40.903 00:15:40.903 Discovery Log Entry 0 00:15:40.903 ---------------------- 00:15:40.903 Transport Type: 3 (TCP) 00:15:40.903 Address Family: 1 (IPv4) 00:15:40.903 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:40.903 Entry Flags: 00:15:40.903 Duplicate Returned Information: 1 00:15:40.903 Explicit Persistent Connection Support for Discovery: 1 00:15:40.903 Transport Requirements: 00:15:40.903 Secure Channel: Not Required 00:15:40.903 Port ID: 0 (0x0000) 00:15:40.903 Controller ID: 65535 (0xffff) 00:15:40.903 Admin Max SQ Size: 128 00:15:40.903 Transport Service Identifier: 4420 00:15:40.903 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:40.903 Transport Address: 10.0.0.2 00:15:40.903 
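The identify dump and discovery log above are the host-side view of the SPDK discovery subsystem listening on 10.0.0.2:4420, which advertises two entries: the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1. As a rough sketch of how the same information can be queried outside this automated run (binary paths and exact flags are assumptions about a typical setup, not taken from this log):

  # Query the discovery log with the kernel initiator via nvme-cli
  sudo nvme discover -t tcp -a 10.0.0.2 -s 4420

  # Or with SPDK's userspace identify example (path assumed; adjust to the local build tree)
  ./build/examples/identify -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

  # Optionally attach the kernel initiator to the NVM subsystem reported in entry 1
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

Either path issues the same Get Log Page (Discovery) sequence that appears in the debug trace above; the nvme-cli commands use the in-kernel NVMe/TCP initiator, while the SPDK example drives the connection entirely in userspace, matching this test.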
Discovery Log Entry 1 00:15:40.903 ---------------------- 00:15:40.903 Transport Type: 3 (TCP) 00:15:40.903 Address Family: 1 (IPv4) 00:15:40.903 Subsystem Type: 2 (NVM Subsystem) 00:15:40.903 Entry Flags: 00:15:40.903 Duplicate Returned Information: 0 00:15:40.903 Explicit Persistent Connection Support for Discovery: 0 00:15:40.903 Transport Requirements: 00:15:40.903 Secure Channel: Not Required 00:15:40.903 Port ID: 0 (0x0000) 00:15:40.903 Controller ID: 65535 (0xffff) 00:15:40.903 Admin Max SQ Size: 128 00:15:40.903 Transport Service Identifier: 4420 00:15:40.903 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:40.903 Transport Address: 10.0.0.2 [2024-07-24 18:02:47.702654] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:40.903 [2024-07-24 18:02:47.702678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaa840) on tqpair=0xe67a60 00:15:40.903 [2024-07-24 18:02:47.702690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.903 [2024-07-24 18:02:47.702700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaa9c0) on tqpair=0xe67a60 00:15:40.903 [2024-07-24 18:02:47.702708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.903 [2024-07-24 18:02:47.702716] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaab40) on tqpair=0xe67a60 00:15:40.903 [2024-07-24 18:02:47.702723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.903 [2024-07-24 18:02:47.702731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.903 [2024-07-24 18:02:47.702739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.903 [2024-07-24 18:02:47.702757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.702764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.702771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.903 [2024-07-24 18:02:47.702787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.903 [2024-07-24 18:02:47.702830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.903 [2024-07-24 18:02:47.702917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.903 [2024-07-24 18:02:47.702928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.903 [2024-07-24 18:02:47.702934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.702941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.903 [2024-07-24 18:02:47.702958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.702965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.702971] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.903 [2024-07-24 18:02:47.702983] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.903 [2024-07-24 18:02:47.703016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.903 [2024-07-24 18:02:47.703088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.903 [2024-07-24 18:02:47.703097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.903 [2024-07-24 18:02:47.703103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.903 [2024-07-24 18:02:47.703118] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:40.903 [2024-07-24 18:02:47.703126] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:40.903 [2024-07-24 18:02:47.703140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.903 [2024-07-24 18:02:47.703162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.903 [2024-07-24 18:02:47.703185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.903 [2024-07-24 18:02:47.703235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.903 [2024-07-24 18:02:47.703263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.903 [2024-07-24 18:02:47.703270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.903 [2024-07-24 18:02:47.703292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.903 [2024-07-24 18:02:47.703316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.903 [2024-07-24 18:02:47.703377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.903 [2024-07-24 18:02:47.703429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.903 [2024-07-24 18:02:47.703438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.903 [2024-07-24 18:02:47.703445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.903 [2024-07-24 18:02:47.703465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703471] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703478] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.903 [2024-07-24 18:02:47.703488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.903 [2024-07-24 18:02:47.703511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.903 [2024-07-24 18:02:47.703561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.903 [2024-07-24 18:02:47.703571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.903 [2024-07-24 18:02:47.703576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.903 [2024-07-24 18:02:47.703596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.903 [2024-07-24 18:02:47.703609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.903 [2024-07-24 18:02:47.703619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.903 [2024-07-24 18:02:47.703642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.903 [2024-07-24 18:02:47.703690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.903 [2024-07-24 18:02:47.703700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.903 [2024-07-24 18:02:47.703706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.703713] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.904 [2024-07-24 18:02:47.703726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.703733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.703740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.904 [2024-07-24 18:02:47.703750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.904 [2024-07-24 18:02:47.703772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.904 [2024-07-24 18:02:47.703818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.904 [2024-07-24 18:02:47.703828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.904 [2024-07-24 18:02:47.703834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.703841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.904 [2024-07-24 18:02:47.703854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.703861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.703867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.904 [2024-07-24 18:02:47.703877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.904 [2024-07-24 18:02:47.703898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.904 [2024-07-24 18:02:47.703948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.904 [2024-07-24 18:02:47.703957] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.904 [2024-07-24 18:02:47.703963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.703970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.904 [2024-07-24 18:02:47.703983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.703989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.703995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.904 [2024-07-24 18:02:47.704005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.904 [2024-07-24 18:02:47.704026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.904 [2024-07-24 18:02:47.704076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.904 [2024-07-24 18:02:47.704085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.904 [2024-07-24 18:02:47.704091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.904 [2024-07-24 18:02:47.704110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.904 [2024-07-24 18:02:47.704133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.904 [2024-07-24 18:02:47.704155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.904 [2024-07-24 18:02:47.704203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.904 [2024-07-24 18:02:47.704215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.904 [2024-07-24 18:02:47.704221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.904 [2024-07-24 18:02:47.704241] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.904 [2024-07-24 18:02:47.704283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.904 [2024-07-24 18:02:47.704310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.904 [2024-07-24 18:02:47.704359] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.904 [2024-07-24 18:02:47.704368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.904 [2024-07-24 18:02:47.704374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.904 [2024-07-24 18:02:47.704395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.904 [2024-07-24 18:02:47.704419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.904 [2024-07-24 18:02:47.704441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.904 [2024-07-24 18:02:47.704502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.904 [2024-07-24 18:02:47.704513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.904 [2024-07-24 18:02:47.704521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.904 [2024-07-24 18:02:47.704545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.904 [2024-07-24 18:02:47.704573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.904 [2024-07-24 18:02:47.704598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.904 [2024-07-24 18:02:47.704642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.904 [2024-07-24 18:02:47.704652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.904 [2024-07-24 18:02:47.704659] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.904 [2024-07-24 18:02:47.704678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.904 [2024-07-24 18:02:47.704702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.904 [2024-07-24 18:02:47.704725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.904 [2024-07-24 18:02:47.704777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.904 [2024-07-24 18:02:47.704787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.904 [2024-07-24 18:02:47.704794] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.904 [2024-07-24 18:02:47.704814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.904 [2024-07-24 18:02:47.704838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.904 [2024-07-24 18:02:47.704861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.904 [2024-07-24 18:02:47.704905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.904 [2024-07-24 18:02:47.704914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.904 [2024-07-24 18:02:47.704920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.904 [2024-07-24 18:02:47.704941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704948] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.904 [2024-07-24 18:02:47.704955] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.904 [2024-07-24 18:02:47.704964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.704987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 18:02:47.705034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.705044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.705050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 [2024-07-24 18:02:47.705071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.905 [2024-07-24 18:02:47.705094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.705117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 18:02:47.705167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.705186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.705193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 
[2024-07-24 18:02:47.705213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.905 [2024-07-24 18:02:47.705237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.705282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 18:02:47.705327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.705337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.705343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 [2024-07-24 18:02:47.705364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.905 [2024-07-24 18:02:47.705388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.705411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 18:02:47.705461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.705471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.705477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 [2024-07-24 18:02:47.705498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.905 [2024-07-24 18:02:47.705521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.705544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 18:02:47.705594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.705604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.705611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 [2024-07-24 18:02:47.705631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.905 [2024-07-24 
18:02:47.705644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.905 [2024-07-24 18:02:47.705654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.705676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 18:02:47.705722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.705731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.705738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 [2024-07-24 18:02:47.705758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.905 [2024-07-24 18:02:47.705781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.705803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 18:02:47.705855] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.705874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.705881] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 [2024-07-24 18:02:47.705901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.705914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.905 [2024-07-24 18:02:47.705924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.705947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 18:02:47.706000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.706009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.706014] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 [2024-07-24 18:02:47.706033] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.905 [2024-07-24 18:02:47.706054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.706076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 18:02:47.706127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.706136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.706142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 [2024-07-24 18:02:47.706161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.905 [2024-07-24 18:02:47.706182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.706204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 18:02:47.706263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.706277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.706284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 [2024-07-24 18:02:47.706304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706316] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.905 [2024-07-24 18:02:47.706326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.706351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 18:02:47.706401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.706410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.706416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 [2024-07-24 18:02:47.706435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.905 [2024-07-24 18:02:47.706458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.905 [2024-07-24 18:02:47.706483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.905 [2024-07-24 
18:02:47.706530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.905 [2024-07-24 18:02:47.706544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.905 [2024-07-24 18:02:47.706550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.905 [2024-07-24 18:02:47.706556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.905 [2024-07-24 18:02:47.706570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.706576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.706583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.706593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.706615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.706668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.706681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.906 [2024-07-24 18:02:47.706687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.706694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.906 [2024-07-24 18:02:47.706707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.706713] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.706719] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.706729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.706751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.706797] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.706806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.906 [2024-07-24 18:02:47.706812] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.706819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.906 [2024-07-24 18:02:47.706831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.706838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.706844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.706853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.706875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.706924] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.706933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.906 [2024-07-24 
18:02:47.706939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.706945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.906 [2024-07-24 18:02:47.706958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.706965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.706972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.706982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.707004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.707053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.707071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.906 [2024-07-24 18:02:47.707078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.906 [2024-07-24 18:02:47.707115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.707138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.707161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.707208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.707222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.906 [2024-07-24 18:02:47.707229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.906 [2024-07-24 18:02:47.707265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.707292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.707319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.707377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.707387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.906 [2024-07-24 18:02:47.707393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707400] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 
00:15:40.906 [2024-07-24 18:02:47.707414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.707436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.707460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.707507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.707522] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.906 [2024-07-24 18:02:47.707529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.906 [2024-07-24 18:02:47.707548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.707571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.707594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.707643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.707652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.906 [2024-07-24 18:02:47.707658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.906 [2024-07-24 18:02:47.707677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.707700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.707723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.707774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.707783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.906 [2024-07-24 18:02:47.707789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.906 [2024-07-24 18:02:47.707809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:40.906 [2024-07-24 18:02:47.707822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.707832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.707855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.707906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.707920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.906 [2024-07-24 18:02:47.707926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.906 [2024-07-24 18:02:47.707947] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707953] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.707960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.707971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.707993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.708041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.708050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.906 [2024-07-24 18:02:47.708056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.708062] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.906 [2024-07-24 18:02:47.708076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.708083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.906 [2024-07-24 18:02:47.708089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.906 [2024-07-24 18:02:47.708098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.906 [2024-07-24 18:02:47.708120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.906 [2024-07-24 18:02:47.708172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.906 [2024-07-24 18:02:47.708186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.708192] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.708213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708225] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.708234] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.708275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.708323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.907 [2024-07-24 18:02:47.708336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.708343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.708363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.708386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.708408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.708459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.907 [2024-07-24 18:02:47.708468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.708475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.708496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.708519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.708544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.708592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.907 [2024-07-24 18:02:47.708601] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.708608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.708628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.708652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.708676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.708729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.907 [2024-07-24 18:02:47.708738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.708745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.708765] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.708788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.708811] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.708874] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.907 [2024-07-24 18:02:47.708884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.708890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.708909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.708921] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.708931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.708954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.709005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.907 [2024-07-24 18:02:47.709015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.709021] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.709041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.709064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.709087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.709140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.907 [2024-07-24 18:02:47.709149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.709155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.709175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709181] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.709197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.709220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.709282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.907 [2024-07-24 18:02:47.709293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.709299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.709320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.709342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.709368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.709418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.907 [2024-07-24 18:02:47.709427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.709433] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.709453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.709475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.709497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.709563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.907 [2024-07-24 18:02:47.709577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.709583] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709590] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.709603] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.709625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.709649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.709702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.907 [2024-07-24 18:02:47.709715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.907 [2024-07-24 18:02:47.709721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.907 [2024-07-24 18:02:47.709741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.907 [2024-07-24 18:02:47.709754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.907 [2024-07-24 18:02:47.709764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.907 [2024-07-24 18:02:47.709788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.907 [2024-07-24 18:02:47.709839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.908 [2024-07-24 18:02:47.709864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.908 [2024-07-24 18:02:47.709871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.709878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.908 [2024-07-24 18:02:47.709891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.709898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.709905] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.908 [2024-07-24 18:02:47.709915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.908 [2024-07-24 18:02:47.709940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.908 [2024-07-24 18:02:47.709988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.908 [2024-07-24 18:02:47.710002] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.908 [2024-07-24 18:02:47.710009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.710015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.908 [2024-07-24 18:02:47.710028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.710034] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.710040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.908 [2024-07-24 18:02:47.710050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.908 [2024-07-24 18:02:47.710074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.908 [2024-07-24 18:02:47.710121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.908 [2024-07-24 18:02:47.710131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.908 [2024-07-24 18:02:47.710137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.710143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.908 [2024-07-24 18:02:47.710156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.710162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.710168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.908 [2024-07-24 18:02:47.710178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.908 [2024-07-24 18:02:47.710201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.908 [2024-07-24 18:02:47.714296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.908 [2024-07-24 18:02:47.714347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.908 [2024-07-24 18:02:47.714355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.714363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.908 [2024-07-24 18:02:47.714390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.714397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.714404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe67a60) 00:15:40.908 [2024-07-24 18:02:47.714420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.908 [2024-07-24 18:02:47.714475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeaacc0, cid 3, qid 0 00:15:40.908 [2024-07-24 18:02:47.714545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.908 [2024-07-24 18:02:47.714554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.908 [2024-07-24 18:02:47.714560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.908 [2024-07-24 18:02:47.714566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeaacc0) on tqpair=0xe67a60 00:15:40.908 [2024-07-24 18:02:47.714577] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 11 milliseconds 00:15:40.908 00:15:40.908 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:40.908 [2024-07-24 18:02:47.762902] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:15:40.908 [2024-07-24 18:02:47.762964] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86025 ] 00:15:41.170 [2024-07-24 18:02:47.907905] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:41.170 [2024-07-24 18:02:47.907999] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:41.170 [2024-07-24 18:02:47.908006] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:41.170 [2024-07-24 18:02:47.908022] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:41.170 [2024-07-24 18:02:47.908035] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:41.170 [2024-07-24 18:02:47.908198] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:41.170 [2024-07-24 18:02:47.908256] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2261a60 0 00:15:41.170 [2024-07-24 18:02:47.921286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:41.170 [2024-07-24 18:02:47.921328] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:41.170 [2024-07-24 18:02:47.921335] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:41.170 [2024-07-24 18:02:47.921340] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:41.170 [2024-07-24 18:02:47.921405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.170 [2024-07-24 18:02:47.921412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.170 [2024-07-24 18:02:47.921417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2261a60) 00:15:41.170 [2024-07-24 18:02:47.921435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:41.170 [2024-07-24 18:02:47.921479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4840, cid 0, qid 0 00:15:41.170 [2024-07-24 18:02:47.929280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.170 [2024-07-24 18:02:47.929312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.170 [2024-07-24 18:02:47.929318] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.170 [2024-07-24 18:02:47.929325] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4840) on tqpair=0x2261a60 00:15:41.170 [2024-07-24 18:02:47.929340] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:41.170 [2024-07-24 18:02:47.929352] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:41.170 [2024-07-24 18:02:47.929359] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:41.170 [2024-07-24 18:02:47.929385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:41.170 [2024-07-24 18:02:47.929391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.170 [2024-07-24 18:02:47.929395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2261a60) 00:15:41.170 [2024-07-24 18:02:47.929410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.170 [2024-07-24 18:02:47.929451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4840, cid 0, qid 0 00:15:41.170 [2024-07-24 18:02:47.929543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.170 [2024-07-24 18:02:47.929550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.170 [2024-07-24 18:02:47.929554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.170 [2024-07-24 18:02:47.929559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4840) on tqpair=0x2261a60 00:15:41.170 [2024-07-24 18:02:47.929566] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:41.170 [2024-07-24 18:02:47.929574] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:41.170 [2024-07-24 18:02:47.929583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.170 [2024-07-24 18:02:47.929588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.170 [2024-07-24 18:02:47.929593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2261a60) 00:15:41.170 [2024-07-24 18:02:47.929600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.170 [2024-07-24 18:02:47.929618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4840, cid 0, qid 0 00:15:41.170 [2024-07-24 18:02:47.930048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.170 [2024-07-24 18:02:47.930066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.170 [2024-07-24 18:02:47.930071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.170 [2024-07-24 18:02:47.930076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4840) on tqpair=0x2261a60 00:15:41.170 [2024-07-24 18:02:47.930082] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:41.170 [2024-07-24 18:02:47.930092] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:41.170 [2024-07-24 18:02:47.930101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.170 [2024-07-24 18:02:47.930106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.170 [2024-07-24 18:02:47.930110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2261a60) 00:15:41.170 [2024-07-24 18:02:47.930118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.170 [2024-07-24 18:02:47.930137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4840, cid 0, qid 0 00:15:41.171 [2024-07-24 18:02:47.930273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:15:41.171 [2024-07-24 18:02:47.930281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.171 [2024-07-24 18:02:47.930285] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.930290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4840) on tqpair=0x2261a60 00:15:41.171 [2024-07-24 18:02:47.930296] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:41.171 [2024-07-24 18:02:47.930307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.930312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.930317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2261a60) 00:15:41.171 [2024-07-24 18:02:47.930324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.171 [2024-07-24 18:02:47.930342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4840, cid 0, qid 0 00:15:41.171 [2024-07-24 18:02:47.930679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.171 [2024-07-24 18:02:47.930691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.171 [2024-07-24 18:02:47.930696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.930701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4840) on tqpair=0x2261a60 00:15:41.171 [2024-07-24 18:02:47.930707] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:41.171 [2024-07-24 18:02:47.930713] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:41.171 [2024-07-24 18:02:47.930722] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:41.171 [2024-07-24 18:02:47.930829] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:41.171 [2024-07-24 18:02:47.930837] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:41.171 [2024-07-24 18:02:47.930849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.930854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.930858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2261a60) 00:15:41.171 [2024-07-24 18:02:47.930865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.171 [2024-07-24 18:02:47.930885] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4840, cid 0, qid 0 00:15:41.171 [2024-07-24 18:02:47.935283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.171 [2024-07-24 18:02:47.935317] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.171 [2024-07-24 18:02:47.935337] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:15:41.171 [2024-07-24 18:02:47.935345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4840) on tqpair=0x2261a60 00:15:41.171 [2024-07-24 18:02:47.935354] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:41.171 [2024-07-24 18:02:47.935383] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.935388] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.935393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2261a60) 00:15:41.171 [2024-07-24 18:02:47.935405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.171 [2024-07-24 18:02:47.935444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4840, cid 0, qid 0 00:15:41.171 [2024-07-24 18:02:47.935763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.171 [2024-07-24 18:02:47.935776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.171 [2024-07-24 18:02:47.935780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.935785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4840) on tqpair=0x2261a60 00:15:41.171 [2024-07-24 18:02:47.935792] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:41.171 [2024-07-24 18:02:47.935798] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:41.171 [2024-07-24 18:02:47.935807] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:41.171 [2024-07-24 18:02:47.935824] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:41.171 [2024-07-24 18:02:47.935841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.935846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2261a60) 00:15:41.171 [2024-07-24 18:02:47.935855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.171 [2024-07-24 18:02:47.935874] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4840, cid 0, qid 0 00:15:41.171 [2024-07-24 18:02:47.936295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.171 [2024-07-24 18:02:47.936312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.171 [2024-07-24 18:02:47.936317] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936322] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2261a60): datao=0, datal=4096, cccid=0 00:15:41.171 [2024-07-24 18:02:47.936328] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a4840) on tqpair(0x2261a60): expected_datao=0, payload_size=4096 00:15:41.171 [2024-07-24 18:02:47.936335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936344] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936349] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.171 [2024-07-24 18:02:47.936366] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.171 [2024-07-24 18:02:47.936370] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4840) on tqpair=0x2261a60 00:15:41.171 [2024-07-24 18:02:47.936387] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:41.171 [2024-07-24 18:02:47.936393] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:41.171 [2024-07-24 18:02:47.936399] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:41.171 [2024-07-24 18:02:47.936409] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:41.171 [2024-07-24 18:02:47.936416] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:41.171 [2024-07-24 18:02:47.936422] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:41.171 [2024-07-24 18:02:47.936432] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:41.171 [2024-07-24 18:02:47.936441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936447] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2261a60) 00:15:41.171 [2024-07-24 18:02:47.936460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:41.171 [2024-07-24 18:02:47.936481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4840, cid 0, qid 0 00:15:41.171 [2024-07-24 18:02:47.936782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.171 [2024-07-24 18:02:47.936795] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.171 [2024-07-24 18:02:47.936800] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4840) on tqpair=0x2261a60 00:15:41.171 [2024-07-24 18:02:47.936814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2261a60) 00:15:41.171 [2024-07-24 18:02:47.936830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.171 [2024-07-24 18:02:47.936838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936842] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2261a60) 00:15:41.171 [2024-07-24 18:02:47.936854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.171 [2024-07-24 18:02:47.936861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2261a60) 00:15:41.171 [2024-07-24 18:02:47.936876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.171 [2024-07-24 18:02:47.936884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936893] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2261a60) 00:15:41.171 [2024-07-24 18:02:47.936899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.171 [2024-07-24 18:02:47.936905] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:41.171 [2024-07-24 18:02:47.936915] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:41.171 [2024-07-24 18:02:47.936923] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.171 [2024-07-24 18:02:47.936928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2261a60) 00:15:41.171 [2024-07-24 18:02:47.936935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.171 [2024-07-24 18:02:47.936960] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4840, cid 0, qid 0 00:15:41.171 [2024-07-24 18:02:47.936967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a49c0, cid 1, qid 0 00:15:41.172 [2024-07-24 18:02:47.936973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4b40, cid 2, qid 0 00:15:41.172 [2024-07-24 18:02:47.936978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4cc0, cid 3, qid 0 00:15:41.172 [2024-07-24 18:02:47.936984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4e40, cid 4, qid 0 00:15:41.172 [2024-07-24 18:02:47.937411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.172 [2024-07-24 18:02:47.937425] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.172 [2024-07-24 18:02:47.937430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.937435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4e40) on tqpair=0x2261a60 00:15:41.172 [2024-07-24 18:02:47.937441] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:41.172 [2024-07-24 
18:02:47.937447] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.937458] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.937466] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.937473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.937478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.937483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2261a60) 00:15:41.172 [2024-07-24 18:02:47.937490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:41.172 [2024-07-24 18:02:47.937509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4e40, cid 4, qid 0 00:15:41.172 [2024-07-24 18:02:47.937867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.172 [2024-07-24 18:02:47.937879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.172 [2024-07-24 18:02:47.937884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.937889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4e40) on tqpair=0x2261a60 00:15:41.172 [2024-07-24 18:02:47.937963] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.937975] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.937985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.937990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2261a60) 00:15:41.172 [2024-07-24 18:02:47.937997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.172 [2024-07-24 18:02:47.938017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4e40, cid 4, qid 0 00:15:41.172 [2024-07-24 18:02:47.938388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.172 [2024-07-24 18:02:47.938401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.172 [2024-07-24 18:02:47.938406] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.938411] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2261a60): datao=0, datal=4096, cccid=4 00:15:41.172 [2024-07-24 18:02:47.938417] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a4e40) on tqpair(0x2261a60): expected_datao=0, payload_size=4096 00:15:41.172 [2024-07-24 18:02:47.938422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.938430] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.938435] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.938445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.172 [2024-07-24 18:02:47.938452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.172 [2024-07-24 18:02:47.938456] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.938462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4e40) on tqpair=0x2261a60 00:15:41.172 [2024-07-24 18:02:47.938473] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:41.172 [2024-07-24 18:02:47.938486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.938497] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.938505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.938510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2261a60) 00:15:41.172 [2024-07-24 18:02:47.938518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.172 [2024-07-24 18:02:47.938538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4e40, cid 4, qid 0 00:15:41.172 [2024-07-24 18:02:47.938905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.172 [2024-07-24 18:02:47.938917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.172 [2024-07-24 18:02:47.938922] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.938927] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2261a60): datao=0, datal=4096, cccid=4 00:15:41.172 [2024-07-24 18:02:47.938933] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a4e40) on tqpair(0x2261a60): expected_datao=0, payload_size=4096 00:15:41.172 [2024-07-24 18:02:47.938938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.938946] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.938951] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.938960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.172 [2024-07-24 18:02:47.938967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.172 [2024-07-24 18:02:47.938971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.938976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4e40) on tqpair=0x2261a60 00:15:41.172 [2024-07-24 18:02:47.938992] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.939003] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.939011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.939016] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2261a60) 00:15:41.172 [2024-07-24 18:02:47.939023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.172 [2024-07-24 18:02:47.939042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4e40, cid 4, qid 0 00:15:41.172 [2024-07-24 18:02:47.939472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.172 [2024-07-24 18:02:47.939487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.172 [2024-07-24 18:02:47.939492] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.939497] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2261a60): datao=0, datal=4096, cccid=4 00:15:41.172 [2024-07-24 18:02:47.939503] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a4e40) on tqpair(0x2261a60): expected_datao=0, payload_size=4096 00:15:41.172 [2024-07-24 18:02:47.939509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.939516] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.939521] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.939530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.172 [2024-07-24 18:02:47.939537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.172 [2024-07-24 18:02:47.939541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.939546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4e40) on tqpair=0x2261a60 00:15:41.172 [2024-07-24 18:02:47.939555] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.939564] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.939576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.939584] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.939590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.939597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.939603] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:41.172 [2024-07-24 18:02:47.939609] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:41.172 [2024-07-24 18:02:47.939616] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:41.172 [2024-07-24 18:02:47.939638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:41.172 [2024-07-24 18:02:47.939643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2261a60) 00:15:41.172 [2024-07-24 18:02:47.939650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.172 [2024-07-24 18:02:47.939659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.939664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.172 [2024-07-24 18:02:47.939668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2261a60) 00:15:41.172 [2024-07-24 18:02:47.939675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.172 [2024-07-24 18:02:47.939701] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4e40, cid 4, qid 0 00:15:41.172 [2024-07-24 18:02:47.939707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4fc0, cid 5, qid 0 00:15:41.172 [2024-07-24 18:02:47.940111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.172 [2024-07-24 18:02:47.940124] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.173 [2024-07-24 18:02:47.940128] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.940133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4e40) on tqpair=0x2261a60 00:15:41.173 [2024-07-24 18:02:47.940141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.173 [2024-07-24 18:02:47.940147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.173 [2024-07-24 18:02:47.940152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.940157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4fc0) on tqpair=0x2261a60 00:15:41.173 [2024-07-24 18:02:47.940168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.940173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2261a60) 00:15:41.173 [2024-07-24 18:02:47.940180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.173 [2024-07-24 18:02:47.940198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4fc0, cid 5, qid 0 00:15:41.173 [2024-07-24 18:02:47.940483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.173 [2024-07-24 18:02:47.940496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.173 [2024-07-24 18:02:47.940501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.940506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4fc0) on tqpair=0x2261a60 00:15:41.173 [2024-07-24 18:02:47.940518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.940523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2261a60) 00:15:41.173 [2024-07-24 18:02:47.940530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.173 [2024-07-24 18:02:47.940549] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4fc0, cid 5, qid 0 00:15:41.173 [2024-07-24 18:02:47.940825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.173 [2024-07-24 18:02:47.940843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.173 [2024-07-24 18:02:47.940848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.940853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4fc0) on tqpair=0x2261a60 00:15:41.173 [2024-07-24 18:02:47.940865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.940870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2261a60) 00:15:41.173 [2024-07-24 18:02:47.940878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.173 [2024-07-24 18:02:47.940900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4fc0, cid 5, qid 0 00:15:41.173 [2024-07-24 18:02:47.941151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.173 [2024-07-24 18:02:47.941163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.173 [2024-07-24 18:02:47.941167] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.941172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4fc0) on tqpair=0x2261a60 00:15:41.173 [2024-07-24 18:02:47.941196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.941202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2261a60) 00:15:41.173 [2024-07-24 18:02:47.941209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.173 [2024-07-24 18:02:47.941218] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.941223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2261a60) 00:15:41.173 [2024-07-24 18:02:47.941230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.173 [2024-07-24 18:02:47.941238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2261a60) 00:15:41.173 [2024-07-24 18:02:47.945291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.173 [2024-07-24 18:02:47.945308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2261a60) 00:15:41.173 [2024-07-24 18:02:47.945321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.173 [2024-07-24 18:02:47.945370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4fc0, cid 5, qid 0 00:15:41.173 
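Editor's note: the *DEBUG*/*NOTICE* records above come from the SPDK host-side nvme_tcp transport while the identify test walks the admin command set — GET FEATURES for Arbitration (cdw10 0x01), Power Management (0x02), Temperature Threshold (0x04) and Number of Queues (0x07), plus Keep Alive and several Get Log Page reads — sending each as a TCP command capsule (capsule_cmd) on tqpair 0x2261a60 and completing it when the matching capsule response PDU (pdu type 5) arrives. As a hedged illustration only — the test itself drives SPDK's userspace initiator, not the kernel driver — the same feature values can be read back against this target with stock nvme-cli; the /dev/nvme0 device name is an assumption and depends on what is already attached on the host:
  modprobe nvme-tcp                                                # load the kernel NVMe/TCP initiator
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme get-feature /dev/nvme0 -f 0x01                              # Arbitration, mirrors the GET FEATURES cid:4 above
  nvme get-feature /dev/nvme0 -f 0x07                              # Number of Queues, mirrors the cid:5 command above
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
These commands are not part of the autotest run; they are only a way to reproduce the same admin traffic by hand.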
[2024-07-24 18:02:47.945378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4e40, cid 4, qid 0 00:15:41.173 [2024-07-24 18:02:47.945383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a5140, cid 6, qid 0 00:15:41.173 [2024-07-24 18:02:47.945389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a52c0, cid 7, qid 0 00:15:41.173 [2024-07-24 18:02:47.945820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.173 [2024-07-24 18:02:47.945834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.173 [2024-07-24 18:02:47.945839] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945844] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2261a60): datao=0, datal=8192, cccid=5 00:15:41.173 [2024-07-24 18:02:47.945850] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a4fc0) on tqpair(0x2261a60): expected_datao=0, payload_size=8192 00:15:41.173 [2024-07-24 18:02:47.945856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945877] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945886] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.173 [2024-07-24 18:02:47.945908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.173 [2024-07-24 18:02:47.945913] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945917] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2261a60): datao=0, datal=512, cccid=4 00:15:41.173 [2024-07-24 18:02:47.945923] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a4e40) on tqpair(0x2261a60): expected_datao=0, payload_size=512 00:15:41.173 [2024-07-24 18:02:47.945929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945936] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945941] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.173 [2024-07-24 18:02:47.945954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.173 [2024-07-24 18:02:47.945958] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945963] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2261a60): datao=0, datal=512, cccid=6 00:15:41.173 [2024-07-24 18:02:47.945968] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a5140) on tqpair(0x2261a60): expected_datao=0, payload_size=512 00:15:41.173 [2024-07-24 18:02:47.945974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945981] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945986] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.945992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.173 [2024-07-24 18:02:47.945999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.173 [2024-07-24 18:02:47.946003] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.946007] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2261a60): datao=0, datal=4096, cccid=7 00:15:41.173 [2024-07-24 18:02:47.946013] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a52c0) on tqpair(0x2261a60): expected_datao=0, payload_size=4096 00:15:41.173 [2024-07-24 18:02:47.946018] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.946026] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.946031] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.946037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.173 [2024-07-24 18:02:47.946044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.173 [2024-07-24 18:02:47.946048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.946053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4fc0) on tqpair=0x2261a60 00:15:41.173 [2024-07-24 18:02:47.946081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.173 [2024-07-24 18:02:47.946088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.173 [2024-07-24 18:02:47.946093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.946097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4e40) on tqpair=0x2261a60 00:15:41.173 [2024-07-24 18:02:47.946112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.173 [2024-07-24 18:02:47.946118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.173 [2024-07-24 18:02:47.946123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.946127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a5140) on tqpair=0x2261a60 00:15:41.173 [2024-07-24 18:02:47.946136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.173 [2024-07-24 18:02:47.946142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.173 [2024-07-24 18:02:47.946147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.173 [2024-07-24 18:02:47.946151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a52c0) on tqpair=0x2261a60 00:15:41.173 ===================================================== 00:15:41.173 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:41.173 ===================================================== 00:15:41.173 Controller Capabilities/Features 00:15:41.173 ================================ 00:15:41.173 Vendor ID: 8086 00:15:41.173 Subsystem Vendor ID: 8086 00:15:41.173 Serial Number: SPDK00000000000001 00:15:41.173 Model Number: SPDK bdev Controller 00:15:41.173 Firmware Version: 24.09 00:15:41.173 Recommended Arb Burst: 6 00:15:41.173 IEEE OUI Identifier: e4 d2 5c 00:15:41.173 Multi-path I/O 00:15:41.173 May have multiple subsystem ports: Yes 00:15:41.173 May have multiple controllers: Yes 00:15:41.173 Associated with SR-IOV VF: No 00:15:41.173 Max Data Transfer Size: 131072 00:15:41.173 Max Number of Namespaces: 32 00:15:41.174 Max Number of I/O Queues: 127 00:15:41.174 NVMe Specification Version (VS): 1.3 00:15:41.174 NVMe Specification Version (Identify): 1.3 
00:15:41.174 Maximum Queue Entries: 128 00:15:41.174 Contiguous Queues Required: Yes 00:15:41.174 Arbitration Mechanisms Supported 00:15:41.174 Weighted Round Robin: Not Supported 00:15:41.174 Vendor Specific: Not Supported 00:15:41.174 Reset Timeout: 15000 ms 00:15:41.174 Doorbell Stride: 4 bytes 00:15:41.174 NVM Subsystem Reset: Not Supported 00:15:41.174 Command Sets Supported 00:15:41.174 NVM Command Set: Supported 00:15:41.174 Boot Partition: Not Supported 00:15:41.174 Memory Page Size Minimum: 4096 bytes 00:15:41.174 Memory Page Size Maximum: 4096 bytes 00:15:41.174 Persistent Memory Region: Not Supported 00:15:41.174 Optional Asynchronous Events Supported 00:15:41.174 Namespace Attribute Notices: Supported 00:15:41.174 Firmware Activation Notices: Not Supported 00:15:41.174 ANA Change Notices: Not Supported 00:15:41.174 PLE Aggregate Log Change Notices: Not Supported 00:15:41.174 LBA Status Info Alert Notices: Not Supported 00:15:41.174 EGE Aggregate Log Change Notices: Not Supported 00:15:41.174 Normal NVM Subsystem Shutdown event: Not Supported 00:15:41.174 Zone Descriptor Change Notices: Not Supported 00:15:41.174 Discovery Log Change Notices: Not Supported 00:15:41.174 Controller Attributes 00:15:41.174 128-bit Host Identifier: Supported 00:15:41.174 Non-Operational Permissive Mode: Not Supported 00:15:41.174 NVM Sets: Not Supported 00:15:41.174 Read Recovery Levels: Not Supported 00:15:41.174 Endurance Groups: Not Supported 00:15:41.174 Predictable Latency Mode: Not Supported 00:15:41.174 Traffic Based Keep ALive: Not Supported 00:15:41.174 Namespace Granularity: Not Supported 00:15:41.174 SQ Associations: Not Supported 00:15:41.174 UUID List: Not Supported 00:15:41.174 Multi-Domain Subsystem: Not Supported 00:15:41.174 Fixed Capacity Management: Not Supported 00:15:41.174 Variable Capacity Management: Not Supported 00:15:41.174 Delete Endurance Group: Not Supported 00:15:41.174 Delete NVM Set: Not Supported 00:15:41.174 Extended LBA Formats Supported: Not Supported 00:15:41.174 Flexible Data Placement Supported: Not Supported 00:15:41.174 00:15:41.174 Controller Memory Buffer Support 00:15:41.174 ================================ 00:15:41.174 Supported: No 00:15:41.174 00:15:41.174 Persistent Memory Region Support 00:15:41.174 ================================ 00:15:41.174 Supported: No 00:15:41.174 00:15:41.174 Admin Command Set Attributes 00:15:41.174 ============================ 00:15:41.174 Security Send/Receive: Not Supported 00:15:41.174 Format NVM: Not Supported 00:15:41.174 Firmware Activate/Download: Not Supported 00:15:41.174 Namespace Management: Not Supported 00:15:41.174 Device Self-Test: Not Supported 00:15:41.174 Directives: Not Supported 00:15:41.174 NVMe-MI: Not Supported 00:15:41.174 Virtualization Management: Not Supported 00:15:41.174 Doorbell Buffer Config: Not Supported 00:15:41.174 Get LBA Status Capability: Not Supported 00:15:41.174 Command & Feature Lockdown Capability: Not Supported 00:15:41.174 Abort Command Limit: 4 00:15:41.174 Async Event Request Limit: 4 00:15:41.174 Number of Firmware Slots: N/A 00:15:41.174 Firmware Slot 1 Read-Only: N/A 00:15:41.174 Firmware Activation Without Reset: N/A 00:15:41.174 Multiple Update Detection Support: N/A 00:15:41.174 Firmware Update Granularity: No Information Provided 00:15:41.174 Per-Namespace SMART Log: No 00:15:41.174 Asymmetric Namespace Access Log Page: Not Supported 00:15:41.174 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:41.174 Command Effects Log Page: Supported 00:15:41.174 Get Log Page Extended 
Data: Supported 00:15:41.174 Telemetry Log Pages: Not Supported 00:15:41.174 Persistent Event Log Pages: Not Supported 00:15:41.174 Supported Log Pages Log Page: May Support 00:15:41.174 Commands Supported & Effects Log Page: Not Supported 00:15:41.174 Feature Identifiers & Effects Log Page:May Support 00:15:41.174 NVMe-MI Commands & Effects Log Page: May Support 00:15:41.174 Data Area 4 for Telemetry Log: Not Supported 00:15:41.174 Error Log Page Entries Supported: 128 00:15:41.174 Keep Alive: Supported 00:15:41.174 Keep Alive Granularity: 10000 ms 00:15:41.174 00:15:41.174 NVM Command Set Attributes 00:15:41.174 ========================== 00:15:41.174 Submission Queue Entry Size 00:15:41.174 Max: 64 00:15:41.174 Min: 64 00:15:41.174 Completion Queue Entry Size 00:15:41.174 Max: 16 00:15:41.174 Min: 16 00:15:41.174 Number of Namespaces: 32 00:15:41.174 Compare Command: Supported 00:15:41.174 Write Uncorrectable Command: Not Supported 00:15:41.174 Dataset Management Command: Supported 00:15:41.174 Write Zeroes Command: Supported 00:15:41.174 Set Features Save Field: Not Supported 00:15:41.174 Reservations: Supported 00:15:41.174 Timestamp: Not Supported 00:15:41.174 Copy: Supported 00:15:41.174 Volatile Write Cache: Present 00:15:41.174 Atomic Write Unit (Normal): 1 00:15:41.174 Atomic Write Unit (PFail): 1 00:15:41.174 Atomic Compare & Write Unit: 1 00:15:41.174 Fused Compare & Write: Supported 00:15:41.174 Scatter-Gather List 00:15:41.174 SGL Command Set: Supported 00:15:41.174 SGL Keyed: Supported 00:15:41.174 SGL Bit Bucket Descriptor: Not Supported 00:15:41.174 SGL Metadata Pointer: Not Supported 00:15:41.174 Oversized SGL: Not Supported 00:15:41.174 SGL Metadata Address: Not Supported 00:15:41.174 SGL Offset: Supported 00:15:41.174 Transport SGL Data Block: Not Supported 00:15:41.174 Replay Protected Memory Block: Not Supported 00:15:41.174 00:15:41.174 Firmware Slot Information 00:15:41.174 ========================= 00:15:41.174 Active slot: 1 00:15:41.174 Slot 1 Firmware Revision: 24.09 00:15:41.174 00:15:41.174 00:15:41.174 Commands Supported and Effects 00:15:41.174 ============================== 00:15:41.174 Admin Commands 00:15:41.174 -------------- 00:15:41.174 Get Log Page (02h): Supported 00:15:41.174 Identify (06h): Supported 00:15:41.174 Abort (08h): Supported 00:15:41.174 Set Features (09h): Supported 00:15:41.174 Get Features (0Ah): Supported 00:15:41.174 Asynchronous Event Request (0Ch): Supported 00:15:41.174 Keep Alive (18h): Supported 00:15:41.174 I/O Commands 00:15:41.174 ------------ 00:15:41.174 Flush (00h): Supported LBA-Change 00:15:41.174 Write (01h): Supported LBA-Change 00:15:41.174 Read (02h): Supported 00:15:41.174 Compare (05h): Supported 00:15:41.174 Write Zeroes (08h): Supported LBA-Change 00:15:41.174 Dataset Management (09h): Supported LBA-Change 00:15:41.174 Copy (19h): Supported LBA-Change 00:15:41.174 00:15:41.174 Error Log 00:15:41.174 ========= 00:15:41.174 00:15:41.174 Arbitration 00:15:41.174 =========== 00:15:41.174 Arbitration Burst: 1 00:15:41.174 00:15:41.174 Power Management 00:15:41.174 ================ 00:15:41.174 Number of Power States: 1 00:15:41.174 Current Power State: Power State #0 00:15:41.174 Power State #0: 00:15:41.174 Max Power: 0.00 W 00:15:41.174 Non-Operational State: Operational 00:15:41.174 Entry Latency: Not Reported 00:15:41.174 Exit Latency: Not Reported 00:15:41.174 Relative Read Throughput: 0 00:15:41.174 Relative Read Latency: 0 00:15:41.174 Relative Write Throughput: 0 00:15:41.174 Relative Write Latency: 0 
00:15:41.174 Idle Power: Not Reported 00:15:41.174 Active Power: Not Reported 00:15:41.174 Non-Operational Permissive Mode: Not Supported 00:15:41.174 00:15:41.174 Health Information 00:15:41.174 ================== 00:15:41.174 Critical Warnings: 00:15:41.174 Available Spare Space: OK 00:15:41.174 Temperature: OK 00:15:41.174 Device Reliability: OK 00:15:41.174 Read Only: No 00:15:41.174 Volatile Memory Backup: OK 00:15:41.174 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:41.174 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:41.174 Available Spare: 0% 00:15:41.174 Available Spare Threshold: 0% 00:15:41.174 Life Percentage Used:[2024-07-24 18:02:47.946288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.174 [2024-07-24 18:02:47.946295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2261a60) 00:15:41.174 [2024-07-24 18:02:47.946303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.174 [2024-07-24 18:02:47.946328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a52c0, cid 7, qid 0 00:15:41.174 [2024-07-24 18:02:47.946712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.174 [2024-07-24 18:02:47.946725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.174 [2024-07-24 18:02:47.946730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.946735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a52c0) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.946798] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:41.175 [2024-07-24 18:02:47.946811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4840) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.946819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.175 [2024-07-24 18:02:47.946826] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a49c0) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.946832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.175 [2024-07-24 18:02:47.946838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4b40) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.946844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.175 [2024-07-24 18:02:47.946850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4cc0) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.946856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.175 [2024-07-24 18:02:47.946867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.946872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.946876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2261a60) 00:15:41.175 [2024-07-24 18:02:47.946884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:41.175 [2024-07-24 18:02:47.946906] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4cc0, cid 3, qid 0 00:15:41.175 [2024-07-24 18:02:47.947190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.175 [2024-07-24 18:02:47.947205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.175 [2024-07-24 18:02:47.947210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.947215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4cc0) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.947223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.947228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.947233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2261a60) 00:15:41.175 [2024-07-24 18:02:47.947250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.175 [2024-07-24 18:02:47.947274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4cc0, cid 3, qid 0 00:15:41.175 [2024-07-24 18:02:47.947576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.175 [2024-07-24 18:02:47.947589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.175 [2024-07-24 18:02:47.947594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.947599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4cc0) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.947605] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:41.175 [2024-07-24 18:02:47.947611] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:41.175 [2024-07-24 18:02:47.947622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.947627] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.947631] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2261a60) 00:15:41.175 [2024-07-24 18:02:47.947639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.175 [2024-07-24 18:02:47.947657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4cc0, cid 3, qid 0 00:15:41.175 [2024-07-24 18:02:47.947951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.175 [2024-07-24 18:02:47.947963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.175 [2024-07-24 18:02:47.947967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.947972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4cc0) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.947984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.947989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.947994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2261a60) 00:15:41.175 [2024-07-24 18:02:47.948001] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.175 [2024-07-24 18:02:47.948018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4cc0, cid 3, qid 0 00:15:41.175 [2024-07-24 18:02:47.948276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.175 [2024-07-24 18:02:47.948287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.175 [2024-07-24 18:02:47.948291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.948296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4cc0) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.948307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.948312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.948317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2261a60) 00:15:41.175 [2024-07-24 18:02:47.948324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.175 [2024-07-24 18:02:47.948342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4cc0, cid 3, qid 0 00:15:41.175 [2024-07-24 18:02:47.948484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.175 [2024-07-24 18:02:47.948495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.175 [2024-07-24 18:02:47.948500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.948505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4cc0) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.948516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.948521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.948525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2261a60) 00:15:41.175 [2024-07-24 18:02:47.948533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.175 [2024-07-24 18:02:47.948551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4cc0, cid 3, qid 0 00:15:41.175 [2024-07-24 18:02:47.948769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.175 [2024-07-24 18:02:47.948783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.175 [2024-07-24 18:02:47.948787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.948792] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4cc0) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.948803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.948808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.948812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2261a60) 00:15:41.175 [2024-07-24 18:02:47.948820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.175 [2024-07-24 18:02:47.948837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x22a4cc0, cid 3, qid 0 00:15:41.175 [2024-07-24 18:02:47.953271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.175 [2024-07-24 18:02:47.953295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.175 [2024-07-24 18:02:47.953300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.953307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4cc0) on tqpair=0x2261a60 00:15:41.175 [2024-07-24 18:02:47.953321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.953326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.175 [2024-07-24 18:02:47.953331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2261a60) 00:15:41.175 [2024-07-24 18:02:47.953341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.175 [2024-07-24 18:02:47.953368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a4cc0, cid 3, qid 0 00:15:41.175 [2024-07-24 18:02:47.955513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.175 [2024-07-24 18:02:47.955529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.175 [2024-07-24 18:02:47.955534] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.176 [2024-07-24 18:02:47.955540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a4cc0) on tqpair=0x2261a60 00:15:41.176 [2024-07-24 18:02:47.955549] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:15:41.176 0% 00:15:41.176 Data Units Read: 0 00:15:41.176 Data Units Written: 0 00:15:41.176 Host Read Commands: 0 00:15:41.176 Host Write Commands: 0 00:15:41.176 Controller Busy Time: 0 minutes 00:15:41.176 Power Cycles: 0 00:15:41.176 Power On Hours: 0 hours 00:15:41.176 Unsafe Shutdowns: 0 00:15:41.176 Unrecoverable Media Errors: 0 00:15:41.176 Lifetime Error Log Entries: 0 00:15:41.176 Warning Temperature Time: 0 minutes 00:15:41.176 Critical Temperature Time: 0 minutes 00:15:41.176 00:15:41.176 Number of Queues 00:15:41.176 ================ 00:15:41.176 Number of I/O Submission Queues: 127 00:15:41.176 Number of I/O Completion Queues: 127 00:15:41.176 00:15:41.176 Active Namespaces 00:15:41.176 ================= 00:15:41.176 Namespace ID:1 00:15:41.176 Error Recovery Timeout: Unlimited 00:15:41.176 Command Set Identifier: NVM (00h) 00:15:41.176 Deallocate: Supported 00:15:41.176 Deallocated/Unwritten Error: Not Supported 00:15:41.176 Deallocated Read Value: Unknown 00:15:41.176 Deallocate in Write Zeroes: Not Supported 00:15:41.176 Deallocated Guard Field: 0xFFFF 00:15:41.176 Flush: Supported 00:15:41.176 Reservation: Supported 00:15:41.176 Namespace Sharing Capabilities: Multiple Controllers 00:15:41.176 Size (in LBAs): 131072 (0GiB) 00:15:41.176 Capacity (in LBAs): 131072 (0GiB) 00:15:41.176 Utilization (in LBAs): 131072 (0GiB) 00:15:41.176 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:41.176 EUI64: ABCDEF0123456789 00:15:41.176 UUID: e274b2a6-2ad9-4a18-b342-6541fe143a6e 00:15:41.176 Thin Provisioning: Not Supported 00:15:41.176 Per-NS Atomic Units: Yes 00:15:41.176 Atomic Boundary Size (Normal): 0 00:15:41.176 Atomic Boundary Size (PFail): 0 00:15:41.176 Atomic Boundary Offset: 0 00:15:41.176 Maximum Single Source Range Length: 
65535 00:15:41.176 Maximum Copy Length: 65535 00:15:41.176 Maximum Source Range Count: 1 00:15:41.176 NGUID/EUI64 Never Reused: No 00:15:41.176 Namespace Write Protected: No 00:15:41.176 Number of LBA Formats: 1 00:15:41.176 Current LBA Format: LBA Format #00 00:15:41.176 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:41.176 00:15:41.176 18:02:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.176 rmmod nvme_tcp 00:15:41.176 rmmod nvme_fabrics 00:15:41.176 rmmod nvme_keyring 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 85965 ']' 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 85965 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 85965 ']' 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 85965 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85965 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.176 killing process with pid 85965 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85965' 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 85965 00:15:41.176 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 85965 00:15:41.433 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.433 18:02:48 
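Editor's note: the xtrace above is the identify test's teardown path (identify.sh@52/@56 -> nvmftestfini -> nvmfcleanup -> killprocess). A hedged sketch of the equivalent manual cleanup, assuming it is run from the SPDK repository root with the target still listening on the default RPC socket /var/tmp/spdk.sock; the PID variable is a placeholder for whatever nvmf_tgt was started with (85965 in this run):
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # detach the test subsystem before tearing the target down
  modprobe -v -r nvme-tcp                                           # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines seen above
  modprobe -v -r nvme-fabrics
  kill "$nvmf_tgt_pid"                                              # placeholder; the test's killprocess helper also waits for the process to exit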
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.433 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.433 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.433 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.433 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.433 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.433 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.433 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:41.433 00:15:41.433 real 0m2.696s 00:15:41.433 user 0m7.622s 00:15:41.433 sys 0m0.714s 00:15:41.433 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.433 18:02:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:41.434 ************************************ 00:15:41.434 END TEST nvmf_identify 00:15:41.434 ************************************ 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.691 ************************************ 00:15:41.691 START TEST nvmf_perf 00:15:41.691 ************************************ 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:41.691 * Looking for test storage... 
00:15:41.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.691 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.692 18:02:48 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:41.692 Cannot find device "nvmf_tgt_br" 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.692 Cannot find device "nvmf_tgt_br2" 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:41.692 Cannot find device "nvmf_tgt_br" 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:15:41.692 Cannot find device "nvmf_tgt_br2" 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:15:41.692 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:41.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:15:41.950 00:15:41.950 --- 10.0.0.2 ping statistics --- 00:15:41.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.950 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:41.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:41.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:15:41.950 00:15:41.950 --- 10.0.0.3 ping statistics --- 00:15:41.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.950 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:41.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:41.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:15:41.950 00:15:41.950 --- 10.0.0.1 ping statistics --- 00:15:41.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.950 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:41.950 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86195 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86195 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 86195 ']' 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.209 18:02:48 
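Editor's note: at this point nvmf_veth_init has built the test topology — veth pairs enslaved to the nvmf_br bridge, with nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) moved into the nvmf_tgt_ns_spdk namespace and nvmf_init_if (10.0.0.1/24) left on the host — and the three ping checks above confirm connectivity in both directions before nvmf_tgt is started inside the namespace. A hedged sketch of how to repeat the same checks by hand if a run fails here; interface and namespace names are taken from the trace above:
  ping -c 1 10.0.0.2                                    # host -> first target interface
  ping -c 1 10.0.0.3                                    # host -> second target interface
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator interface
  ip netns exec nvmf_tgt_ns_spdk ip -4 addr show        # confirm nvmf_tgt_if/nvmf_tgt_if2 carry 10.0.0.2/24 and 10.0.0.3/24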
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.209 18:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:42.209 [2024-07-24 18:02:49.003955] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:15:42.209 [2024-07-24 18:02:49.004080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.209 [2024-07-24 18:02:49.140821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.467 [2024-07-24 18:02:49.250457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.467 [2024-07-24 18:02:49.250523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.467 [2024-07-24 18:02:49.250534] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.467 [2024-07-24 18:02:49.250544] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.467 [2024-07-24 18:02:49.250570] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.467 [2024-07-24 18:02:49.250750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.467 [2024-07-24 18:02:49.251688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.467 [2024-07-24 18:02:49.251846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.467 [2024-07-24 18:02:49.251846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.034 18:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.034 18:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:15:43.034 18:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:43.034 18:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:43.034 18:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:43.300 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.300 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:43.300 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:43.556 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:43.556 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:43.814 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:43.814 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:44.071 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:44.072 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:44.072 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:44.072 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:44.072 18:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:44.330 [2024-07-24 18:02:51.192777] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.330 18:02:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:44.619 18:02:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:44.619 18:02:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:44.877 18:02:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:44.877 18:02:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:45.135 18:02:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.393 [2024-07-24 18:02:52.162284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.393 18:02:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:45.652 18:02:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:45.652 18:02:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:45.652 18:02:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:45.652 18:02:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:46.594 Initializing NVMe Controllers 00:15:46.594 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:46.594 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:46.594 Initialization complete. Launching workers. 
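Everything perf.sh has configured up to this point is plain JSON-RPC against the running target, so the bring-up traced above condenses to a short rpc.py sequence. A sketch of that flow, with /home/vagrant/spdk_repo/spdk/scripts/rpc.py shortened to rpc.py and the comments added here for orientation only:
    rpc.py bdev_malloc_create 64 512                                   # RAM-backed bdev, returned as Malloc0
    rpc.py nvmf_create_transport -t tcp -o                             # enable the TCP transport
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # namespace 1, 512-byte sectors
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # namespace 2, the local 0000:00:10.0 drive
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
The spdk_nvme_perf runs that follow first hit the local PCIe drive directly (the table below) and then exercise the same namespaces over NVMe/TCP at 10.0.0.2:4420.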
00:15:46.594 ======================================================== 00:15:46.594 Latency(us) 00:15:46.594 Device Information : IOPS MiB/s Average min max 00:15:46.594 PCIE (0000:00:10.0) NSID 1 from core 0: 23542.82 91.96 1358.19 248.72 8288.21 00:15:46.594 ======================================================== 00:15:46.594 Total : 23542.82 91.96 1358.19 248.72 8288.21 00:15:46.594 00:15:46.594 18:02:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:47.969 Initializing NVMe Controllers 00:15:47.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:47.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:47.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:47.969 Initialization complete. Launching workers. 00:15:47.969 ======================================================== 00:15:47.969 Latency(us) 00:15:47.969 Device Information : IOPS MiB/s Average min max 00:15:47.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3765.92 14.71 265.26 96.62 4292.79 00:15:47.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8194.05 6011.24 12055.12 00:15:47.969 ======================================================== 00:15:47.969 Total : 3888.91 15.19 516.03 96.62 12055.12 00:15:47.969 00:15:47.969 18:02:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:49.396 Initializing NVMe Controllers 00:15:49.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:49.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:49.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:49.396 Initialization complete. Launching workers. 00:15:49.396 ======================================================== 00:15:49.396 Latency(us) 00:15:49.396 Device Information : IOPS MiB/s Average min max 00:15:49.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9537.70 37.26 3355.34 625.52 7090.37 00:15:49.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2705.84 10.57 11943.61 5759.79 24421.13 00:15:49.396 ======================================================== 00:15:49.396 Total : 12243.53 47.83 5253.35 625.52 24421.13 00:15:49.396 00:15:49.396 18:02:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:49.396 18:02:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:51.979 Initializing NVMe Controllers 00:15:51.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:51.979 Controller IO queue size 128, less than required. 00:15:51.979 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:51.979 Controller IO queue size 128, less than required. 
00:15:51.979 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:51.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:51.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:51.979 Initialization complete. Launching workers. 00:15:51.979 ======================================================== 00:15:51.979 Latency(us) 00:15:51.979 Device Information : IOPS MiB/s Average min max 00:15:51.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1435.80 358.95 90581.79 40722.84 160029.02 00:15:51.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 578.92 144.73 233389.57 86600.89 363813.18 00:15:51.979 ======================================================== 00:15:51.979 Total : 2014.71 503.68 131616.88 40722.84 363813.18 00:15:51.979 00:15:51.979 18:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:52.239 Initializing NVMe Controllers 00:15:52.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:52.239 Controller IO queue size 128, less than required. 00:15:52.239 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:52.239 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:52.239 Controller IO queue size 128, less than required. 00:15:52.239 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:52.239 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:52.239 WARNING: Some requested NVMe devices were skipped 00:15:52.239 No valid NVMe controllers or AIO or URING devices found 00:15:52.239 18:02:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:54.788 Initializing NVMe Controllers 00:15:54.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:54.788 Controller IO queue size 128, less than required. 00:15:54.788 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:54.788 Controller IO queue size 128, less than required. 00:15:54.788 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:54.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:54.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:54.788 Initialization complete. Launching workers. 
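The skipped-namespace warnings in the -o 36964 run above are the expected outcome rather than a failure: 36964 is not a multiple of either namespace's sector size (36964 = 72 * 512 + 100 = 9 * 4096 + 100), so spdk_nvme_perf drops both NSID 1 and NSID 2 and then reports that no valid controllers remain. The deliberately odd I/O size makes this in effect a negative check of the size-validation path; the run that follows switches back to an aligned 256 KiB I/O size (-o 262144) and adds --transport-stat to dump the per-namespace TCP poll statistics shown next.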
00:15:54.788 00:15:54.788 ==================== 00:15:54.788 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:54.788 TCP transport: 00:15:54.788 polls: 8931 00:15:54.788 idle_polls: 4426 00:15:54.788 sock_completions: 4505 00:15:54.788 nvme_completions: 4723 00:15:54.789 submitted_requests: 7040 00:15:54.789 queued_requests: 1 00:15:54.789 00:15:54.789 ==================== 00:15:54.789 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:54.789 TCP transport: 00:15:54.789 polls: 11288 00:15:54.789 idle_polls: 7456 00:15:54.789 sock_completions: 3832 00:15:54.789 nvme_completions: 7005 00:15:54.789 submitted_requests: 10488 00:15:54.789 queued_requests: 1 00:15:54.789 ======================================================== 00:15:54.789 Latency(us) 00:15:54.789 Device Information : IOPS MiB/s Average min max 00:15:54.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1180.29 295.07 112264.04 66990.72 191253.64 00:15:54.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1750.69 437.67 73558.67 33115.20 122323.47 00:15:54.789 ======================================================== 00:15:54.789 Total : 2930.98 732.74 89145.12 33115.20 191253.64 00:15:54.789 00:15:54.789 18:03:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:54.789 18:03:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.125 18:03:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:55.125 18:03:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:55.125 18:03:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:55.125 18:03:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:55.125 18:03:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:55.125 18:03:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.125 18:03:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:55.125 18:03:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.125 18:03:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.125 rmmod nvme_tcp 00:15:55.125 rmmod nvme_fabrics 00:15:55.125 rmmod nvme_keyring 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86195 ']' 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86195 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 86195 ']' 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 86195 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86195 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:55.125 killing process with pid 86195 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86195' 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 86195 00:15:55.125 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 86195 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:56.077 00:15:56.077 real 0m14.399s 00:15:56.077 user 0m52.187s 00:15:56.077 sys 0m3.814s 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:56.077 ************************************ 00:15:56.077 END TEST nvmf_perf 00:15:56.077 ************************************ 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.077 ************************************ 00:15:56.077 START TEST nvmf_fio_host 00:15:56.077 ************************************ 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:56.077 * Looking for test storage... 
00:15:56.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.077 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.078 18:03:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.078 18:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:56.078 Cannot find device "nvmf_tgt_br" 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.078 Cannot find device "nvmf_tgt_br2" 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:56.078 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:56.336 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:56.336 
Cannot find device "nvmf_tgt_br" 00:15:56.336 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:56.336 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:56.336 Cannot find device "nvmf_tgt_br2" 00:15:56.336 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:56.336 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:56.336 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:56.337 18:03:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.337 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:56.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:15:56.595 00:15:56.595 --- 10.0.0.2 ping statistics --- 00:15:56.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.595 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:56.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:15:56.595 00:15:56.595 --- 10.0.0.3 ping statistics --- 00:15:56.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.595 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:15:56.595 00:15:56.595 --- 10.0.0.1 ping statistics --- 00:15:56.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.595 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=86670 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
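At this point nvmftestinit has rebuilt the veth topology that all of these host tests share: nvmf_init_if (10.0.0.1) stays in the root namespace as the initiator-side interface, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace for the target, and the peer ends are joined by the nvmf_br bridge with an iptables rule admitting TCP port 4420. The earlier "Cannot find device" / "Cannot open network namespace" lines are only the tolerated teardown of leftovers from a previous run (note the '-- # true' entries after them). Stripped of the trace prefixes, the core of the setup is roughly:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
(the second target interface and the various 'ip link set ... up' calls are handled the same way), after which the three ping checks above confirm that 10.0.0.2, 10.0.0.3 and 10.0.0.1 are all reachable across the bridge.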
00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 86670 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 86670 ']' 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.595 18:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.595 [2024-07-24 18:03:03.450272] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:15:56.595 [2024-07-24 18:03:03.450399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.877 [2024-07-24 18:03:03.592635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.877 [2024-07-24 18:03:03.716027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.877 [2024-07-24 18:03:03.716110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.877 [2024-07-24 18:03:03.716123] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.877 [2024-07-24 18:03:03.716133] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.877 [2024-07-24 18:03:03.716141] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
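The target itself is started inside that namespace with all tracepoint groups enabled (-e 0xFFFF) and a four-core reactor mask (-m 0xF), and the harness then blocks in waitforlisten until the application's JSON-RPC socket answers; that is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above corresponds to. A rough sketch of the pattern (the polling loop is illustrative, not the exact waitforlisten implementation; rpc.py again stands for scripts/rpc.py in the SPDK repo):
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # keep polling until /var/tmp/spdk.sock accepts RPCs
    done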
00:15:56.877 [2024-07-24 18:03:03.716274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.877 [2024-07-24 18:03:03.716994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.877 [2024-07-24 18:03:03.717196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.877 [2024-07-24 18:03:03.717197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.443 18:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.443 18:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:15:57.443 18:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:57.702 [2024-07-24 18:03:04.586271] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.702 18:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:57.702 18:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:57.702 18:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.702 18:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:57.960 Malloc1 00:15:57.960 18:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:58.218 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:58.476 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.733 [2024-07-24 18:03:05.661023] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.733 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # shift 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:59.336 18:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:59.336 18:03:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:59.336 18:03:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:59.336 18:03:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:59.336 18:03:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:59.336 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:59.336 fio-3.35 00:15:59.336 Starting 1 thread 00:16:01.860 00:16:01.860 test: (groupid=0, jobs=1): err= 0: pid=86803: Wed Jul 24 18:03:08 2024 00:16:01.860 read: IOPS=9069, BW=35.4MiB/s (37.1MB/s)(71.1MiB/2006msec) 00:16:01.860 slat (nsec): min=1772, max=218838, avg=2250.74, stdev=2125.80 00:16:01.860 clat (usec): min=2238, max=13798, avg=7357.62, stdev=669.07 00:16:01.860 lat (usec): min=2264, max=13800, avg=7359.87, stdev=668.82 00:16:01.860 clat percentiles (usec): 00:16:01.860 | 1.00th=[ 6128], 5.00th=[ 6521], 10.00th=[ 6652], 20.00th=[ 6849], 00:16:01.860 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:16:01.860 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 8455], 00:16:01.860 | 99.00th=[ 9503], 99.50th=[ 9896], 99.90th=[11994], 99.95th=[12911], 00:16:01.860 | 99.99th=[13173] 00:16:01.860 bw ( KiB/s): min=34696, max=37528, per=99.89%, avg=36238.00, stdev=1281.12, samples=4 00:16:01.860 iops : min= 8674, max= 9382, avg=9059.50, stdev=320.28, samples=4 00:16:01.860 write: IOPS=9080, BW=35.5MiB/s (37.2MB/s)(71.2MiB/2006msec); 0 zone resets 00:16:01.860 slat (nsec): min=1832, max=139615, avg=2356.76, stdev=1275.75 00:16:01.860 clat (usec): min=1411, max=12926, avg=6687.09, stdev=590.38 00:16:01.860 lat (usec): min=1419, max=12929, avg=6689.45, stdev=590.22 00:16:01.860 clat percentiles (usec): 00:16:01.860 | 1.00th=[ 5669], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6259], 
00:16:01.860 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6718], 00:16:01.860 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7308], 95.00th=[ 7635], 00:16:01.860 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[10421], 99.95th=[11338], 00:16:01.860 | 99.99th=[12911] 00:16:01.860 bw ( KiB/s): min=34944, max=37848, per=100.00%, avg=36326.00, stdev=1251.27, samples=4 00:16:01.860 iops : min= 8736, max= 9462, avg=9081.50, stdev=312.82, samples=4 00:16:01.860 lat (msec) : 2=0.03%, 4=0.15%, 10=99.51%, 20=0.31% 00:16:01.860 cpu : usr=64.69%, sys=26.93%, ctx=8, majf=0, minf=6 00:16:01.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:01.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.860 issued rwts: total=18193,18216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.860 00:16:01.860 Run status group 0 (all jobs): 00:16:01.860 READ: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.1MiB (74.5MB), run=2006-2006msec 00:16:01.860 WRITE: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.2MiB (74.6MB), run=2006-2006msec 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:01.860 18:03:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:01.860 18:03:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:01.860 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:01.860 fio-3.35 00:16:01.860 Starting 1 thread 00:16:04.388 00:16:04.388 test: (groupid=0, jobs=1): err= 0: pid=86854: Wed Jul 24 18:03:10 2024 00:16:04.388 read: IOPS=8380, BW=131MiB/s (137MB/s)(263MiB/2007msec) 00:16:04.388 slat (usec): min=2, max=124, avg= 3.42, stdev= 1.67 00:16:04.388 clat (usec): min=2655, max=17951, avg=8987.04, stdev=2150.33 00:16:04.388 lat (usec): min=2658, max=17954, avg=8990.46, stdev=2150.37 00:16:04.388 clat percentiles (usec): 00:16:04.388 | 1.00th=[ 4752], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 7046], 00:16:04.388 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 00:16:04.388 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11469], 95.00th=[12387], 00:16:04.388 | 99.00th=[14746], 99.50th=[15401], 99.90th=[17695], 99.95th=[17695], 00:16:04.388 | 99.99th=[17957] 00:16:04.388 bw ( KiB/s): min=56622, max=80448, per=51.71%, avg=69339.50, stdev=12175.19, samples=4 00:16:04.388 iops : min= 3538, max= 5028, avg=4333.50, stdev=761.25, samples=4 00:16:04.388 write: IOPS=5128, BW=80.1MiB/s (84.0MB/s)(142MiB/1770msec); 0 zone resets 00:16:04.388 slat (usec): min=32, max=203, avg=37.95, stdev= 4.46 00:16:04.388 clat (usec): min=5481, max=19106, avg=10954.00, stdev=1884.14 00:16:04.388 lat (usec): min=5516, max=19143, avg=10991.95, stdev=1884.31 00:16:04.388 clat percentiles (usec): 00:16:04.388 | 1.00th=[ 7504], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9372], 00:16:04.388 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[11207], 00:16:04.388 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13698], 95.00th=[14484], 00:16:04.388 | 99.00th=[15926], 99.50th=[16909], 99.90th=[18220], 99.95th=[18482], 00:16:04.388 | 99.99th=[19006] 00:16:04.388 bw ( KiB/s): min=60135, max=83072, per=87.86%, avg=72089.75, stdev=11928.45, samples=4 00:16:04.388 iops : min= 3758, max= 5192, avg=4505.50, stdev=745.67, samples=4 00:16:04.388 lat (msec) : 4=0.23%, 10=54.65%, 20=45.12% 00:16:04.388 cpu : usr=72.18%, sys=18.25%, ctx=18, majf=0, minf=26 00:16:04.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:04.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.388 issued rwts: total=16819,9077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.388 00:16:04.388 Run status group 0 (all jobs): 00:16:04.388 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (276MB), run=2007-2007msec 00:16:04.388 
WRITE: bw=80.1MiB/s (84.0MB/s), 80.1MiB/s-80.1MiB/s (84.0MB/s-84.0MB/s), io=142MiB (149MB), run=1770-1770msec 00:16:04.388 18:03:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:04.388 rmmod nvme_tcp 00:16:04.388 rmmod nvme_fabrics 00:16:04.388 rmmod nvme_keyring 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 86670 ']' 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 86670 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 86670 ']' 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 86670 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86670 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:04.388 killing process with pid 86670 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86670' 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 86670 00:16:04.388 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 86670 00:16:04.648 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:04.648 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:04.648 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:04.648 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:04.648 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:04.648 
18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.648 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.648 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.648 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:04.648 00:16:04.648 real 0m8.707s 00:16:04.648 user 0m35.181s 00:16:04.648 sys 0m2.508s 00:16:04.648 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:04.648 ************************************ 00:16:04.648 END TEST nvmf_fio_host 00:16:04.648 ************************************ 00:16:04.648 18:03:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.907 ************************************ 00:16:04.907 START TEST nvmf_failover 00:16:04.907 ************************************ 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:04.907 * Looking for test storage... 00:16:04.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.907 
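
The nvmf/common.sh block above also generates a per-run host identity: NVME_HOSTNQN comes from nvme gen-hostnqn, NVME_HOSTID is the matching UUID, and both are packed into the NVME_HOST argument array. This particular test drives I/O through bdevperf rather than the kernel initiator, but as an illustrative sketch (not a command this script runs), that identity would typically be handed to nvme-cli like so, using the subsystem and listener configured later in this log:

  # hypothetical kernel-initiator connect reusing the generated host identity
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
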
18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:04.907 Cannot find device "nvmf_tgt_br" 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:04.907 Cannot find device "nvmf_tgt_br2" 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:04.907 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:04.907 Cannot find device "nvmf_tgt_br" 00:16:04.908 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:16:04.908 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:04.908 Cannot find device "nvmf_tgt_br2" 00:16:04.908 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:16:04.908 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:05.166 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:05.166 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.166 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:05.166 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.166 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:05.166 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:05.166 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:05.166 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:05.166 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:05.166 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:05.166 18:03:11 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:05.166 18:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:05.166 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:05.166 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:05.166 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:05.166 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:05.166 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:05.166 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:05.166 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:05.166 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:05.167 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:05.167 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:05.167 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:05.167 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:05.167 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:05.167 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:05.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:16:05.425 00:16:05.425 --- 10.0.0.2 ping statistics --- 00:16:05.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.425 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:05.425 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:05.425 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:05.425 00:16:05.425 --- 10.0.0.3 ping statistics --- 00:16:05.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.425 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:05.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:05.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:16:05.425 00:16:05.425 --- 10.0.0.1 ping statistics --- 00:16:05.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.425 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87068 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87068 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87068 ']' 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.425 18:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:05.426 [2024-07-24 18:03:12.261180] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:16:05.426 [2024-07-24 18:03:12.261885] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.684 [2024-07-24 18:03:12.404550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:05.684 [2024-07-24 18:03:12.540873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
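
To condense the network bring-up traced above: nvmf_veth_init creates the nvmf_tgt_ns_spdk namespace, builds veth pairs whose target-side ends are moved into it, bridges the host-side ends over nvmf_br, opens TCP port 4420 in iptables, and verifies reachability with the three pings (host 10.0.0.1, namespaced target 10.0.0.2 and 10.0.0.3). The target is then launched inside the namespace so it binds the namespaced addresses. A sketch of the same topology using the commands visible in this run (link-up steps and the second veth pair for 10.0.0.3 omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # run the target inside the namespace so it can listen on 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
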
00:16:05.684 [2024-07-24 18:03:12.541126] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.684 [2024-07-24 18:03:12.541253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.684 [2024-07-24 18:03:12.541420] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.684 [2024-07-24 18:03:12.541452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.684 [2024-07-24 18:03:12.541742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.684 [2024-07-24 18:03:12.541824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.684 [2024-07-24 18:03:12.541915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.251 18:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.251 18:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:16:06.251 18:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.251 18:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:06.251 18:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:06.509 18:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.509 18:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:06.766 [2024-07-24 18:03:13.571622] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.766 18:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:07.025 Malloc0 00:16:07.025 18:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:07.283 18:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:07.541 18:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.799 [2024-07-24 18:03:14.663920] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.799 18:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:08.057 [2024-07-24 18:03:14.952362] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:08.057 18:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:08.624 [2024-07-24 18:03:15.297051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:08.624 18:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87186 00:16:08.624 
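
The RPC sequence just traced configures the target end to end: a TCP transport (with the -o -u 8192 options used throughout these tests), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and three listeners on 10.0.0.2 ports 4420/4421/4422 that the failover steps below will juggle. Reduced to its rpc.py calls (the $rpc shorthand is ours):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
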
18:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:08.624 18:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:08.624 18:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87186 /var/tmp/bdevperf.sock 00:16:08.624 18:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87186 ']' 00:16:08.624 18:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:08.624 18:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:08.624 18:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:08.624 18:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.624 18:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:09.559 18:03:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.559 18:03:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:16:09.559 18:03:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:09.817 NVMe0n1 00:16:09.817 18:03:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:10.383 00:16:10.383 18:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87238 00:16:10.383 18:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:10.383 18:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:11.386 18:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.645 [2024-07-24 18:03:18.346879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.346939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.346950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.346960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.346970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the 
state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.346980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.346991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.347994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.645 [2024-07-24 18:03:18.348887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 
18:03:18.349144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.349158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.349170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.349183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.349195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.349210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.349491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.349506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.349519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.349531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.349543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.349557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same 
with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.350888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.351021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.351154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.351289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.351407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.351418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.351428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 [2024-07-24 18:03:18.351561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bee50 is same with the state(5) to be set 00:16:11.646 18:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:14.937 18:03:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:14.937 00:16:14.937 18:03:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:15.195 [2024-07-24 18:03:21.981572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 
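
The @43-@48 steps above are the core of the failover exercise: with bdevperf running its 15-second verify workload against NVMe0n1 over the paths attached earlier (4420 and 4421), the test removes the 4420 listener, waits, attaches a third path on 4422 through the bdevperf RPC socket, and then removes 4421, so the bdev_nvme layer must keep I/O flowing on whichever path remains. The bursts of tcp.c:1653 recv-state messages after each removal appear to be the target quiescing the qpairs that belonged to the deleted listener. The two path operations, as issued in this run ($rpc shorthand is ours):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # add another path to the existing NVMe0 controller via the 4422 listener (initiator side)
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # then retire the 4421 listener on the target side
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
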
00:16:15.195 [2024-07-24 18:03:21.981626] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981945] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.195 [2024-07-24 18:03:21.981959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.981972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.981988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982019] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982122] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982213] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 [2024-07-24 18:03:21.982270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfbd0 is same with the state(5) to be set 00:16:15.196 18:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:18.538 18:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.538 [2024-07-24 18:03:25.268594] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.538 18:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:19.472 18:03:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:19.730 [2024-07-24 18:03:26.614134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614255] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the 
state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 [2024-07-24 18:03:26.614624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878f30 is same with the state(5) to be set 00:16:19.730 18:03:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 87238 00:16:26.311 0 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 87186 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87186 ']' 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87186 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87186 00:16:26.311 killing process with pid 87186 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87186' 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87186 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87186 00:16:26.311 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:26.311 [2024-07-24 18:03:15.378112] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:16:26.311 [2024-07-24 18:03:15.378230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87186 ] 00:16:26.311 [2024-07-24 18:03:15.511697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.311 [2024-07-24 18:03:15.647660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.311 Running I/O for 15 seconds... 
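
With wait 87238 returning and the harness printing 0, the background perform_tests run appears to have completed without I/O failures across all three listener transitions; try.txt, dumped next, is bdevperf's own log of that run. Condensed, the failover sequence exercised above is (shorthand variables are ours; values match this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # drop the primary path
  sleep 3
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # drop the path bdevperf failed over to
  sleep 3
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # restore the original listener
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # retire the temporary path
  wait 87238                                                            # returns once the 15 s verify job ends
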
00:16:26.311 [2024-07-24 18:03:18.352152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-24 18:03:18.352210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.311 [2024-07-24 18:03:18.352239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-24 18:03:18.352267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.311 [2024-07-24 18:03:18.352285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-24 18:03:18.352301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.311 [2024-07-24 18:03:18.352320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352550] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352908] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.352976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.352991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.353031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.312 [2024-07-24 18:03:18.353064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.312 [2024-07-24 18:03:18.353096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.312 [2024-07-24 18:03:18.353140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.312 [2024-07-24 18:03:18.353172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.312 [2024-07-24 18:03:18.353203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.312 [2024-07-24 18:03:18.353234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83872 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.312 [2024-07-24 18:03:18.353275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.312 [2024-07-24 18:03:18.353307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.353337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.353368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.353399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.353430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.353467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.353499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.353530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.312 [2024-07-24 18:03:18.353579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.312 [2024-07-24 18:03:18.353596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:26.312 [2024-07-24 18:03:18.353611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.353643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.353674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.353706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.353738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.353769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.353801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.353832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.353863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.353900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.353932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.353964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.353981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.353995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.354027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.354059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.354090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.354122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.354153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.354184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.354216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.354247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.354295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.354327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.313 [2024-07-24 18:03:18.354359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.354390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.354423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.354455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.354486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.354517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.354549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.354581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.354612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.354654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.354685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.313 [2024-07-24 18:03:18.354706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.313 [2024-07-24 18:03:18.354721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.354737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.354751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.354767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.354781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.354797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.354811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.354827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.354842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.354858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.354872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.354888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.354902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.354922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.354937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.354953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.354967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.354983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.354997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 
18:03:18.355233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.314 [2024-07-24 18:03:18.355937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.314 [2024-07-24 18:03:18.355968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.355993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84408 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:18.356674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e48a0 is same with the state(5) to be set 00:16:26.315 [2024-07-24 18:03:18.356718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.315 [2024-07-24 18:03:18.356730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.315 [2024-07-24 18:03:18.356741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84456 len:8 PRP1 0x0 PRP2 0x0 00:16:26.315 [2024-07-24 18:03:18.356758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356825] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9e48a0 was disconnected and freed. reset controller. 
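The long run of ABORTED - SQ DELETION (00/08) notices above is the expected side effect of tearing down I/O submission queue 1 for the reset: every command still queued on qpair 0x9e48a0 is completed manually with Status Code Type 00h (generic) and Status Code 08h (Command Aborted due to SQ Deletion), which is what the "(00/08)" in these notices encodes. Below is a minimal sketch, not taken from the test itself, of how a host application could recognize that status on its own completions; it assumes the spdk_nvme_cpl layout and the SPDK_NVME_SCT_GENERIC / SPDK_NVME_SC_ABORTED_SQ_DELETION constants from SPDK's public spdk/nvme_spec.h, and the helper name cpl_is_sq_deletion_abort is made up for illustration.

#include <stdbool.h>
#include "spdk/nvme_spec.h"   /* struct spdk_nvme_cpl, SPDK_NVME_SCT_*, SPDK_NVME_SC_* */

/* Illustrative helper (not part of SPDK): true when a completion carries the
 * "ABORTED - SQ DELETION (00/08)" status seen throughout this log, i.e.
 * Status Code Type 00h (generic) and Status Code 08h (aborted, SQ deleted). */
static bool
cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

/* Example completion callback that just counts such aborts; commands completed
 * this way were never executed by the target and can be resubmitted after the reset. */
static void
count_sq_deletion_aborts(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	unsigned long *aborted = cb_arg;

	if (cpl_is_sq_deletion_abort(cpl)) {
		(*aborted)++;
	}
}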
00:16:26.315 [2024-07-24 18:03:18.356851] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:26.315 [2024-07-24 18:03:18.356917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.315 [2024-07-24 18:03:18.356935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.315 [2024-07-24 18:03:18.356967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.356983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.315 [2024-07-24 18:03:18.356998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.357013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.315 [2024-07-24 18:03:18.357028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:18.357043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:26.315 [2024-07-24 18:03:18.360449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:26.315 [2024-07-24 18:03:18.360499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x993e30 (9): Bad file descriptor 00:16:26.315 [2024-07-24 18:03:18.394920] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
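The sequence just above is the failover itself: after the data-path qpair 0x9e48a0 is freed, the four queued ASYNC EVENT REQUESTs on the admin queue are aborted the same way, the controller at 10.0.0.2:4420 is marked failed, bdev_nvme switches to the alternate trid 10.0.0.2:4421, the flush of the stale tqpair 0x993e30 fails with "Bad file descriptor" (9), and the reset completes successfully roughly 35 ms later. For illustration only, a hedged sketch of the equivalent operation driven through SPDK's public host API follows; it is not how bdev_nvme performs the failover recorded here, it assumes spdk_nvme_ctrlr_set_trid() and spdk_nvme_ctrlr_reset() as declared in spdk/nvme.h, and the function name failover_to_alternate_path is made up.

#include <stdio.h>
#include "spdk/nvme.h"   /* spdk_nvme_ctrlr_*, struct spdk_nvme_transport_id */

/* Illustrative only: point an already-failed NVMe-oF/TCP controller at an
 * alternate listener and reset it, mirroring the 10.0.0.2:4420 -> 10.0.0.2:4421
 * switch logged above. */
static int
failover_to_alternate_path(struct spdk_nvme_ctrlr *ctrlr,
			   const char *traddr, const char *trsvcid)
{
	struct spdk_nvme_transport_id trid = {0};
	int rc;

	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "%s", traddr);
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "%s", trsvcid);
	snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", "nqn.2016-06.io.spdk:cnode1");

	rc = spdk_nvme_ctrlr_set_trid(ctrlr, &trid);   /* retarget the connection */
	if (rc != 0) {
		return rc;
	}
	return spdk_nvme_ctrlr_reset(ctrlr);           /* reconnect on the new path */
}

Once such a reset succeeds, I/O that completed with the SQ-deletion status counted above can simply be resubmitted on the newly created qpairs, which is what the bdevperf workload in this test does before the second abort burst below.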
00:16:26.315 [2024-07-24 18:03:21.983728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.983780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:21.983806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.983845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:21.983862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.983877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:21.983894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.983909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:21.983925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.983940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:21.983957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.983972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:21.983988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.984003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:21.984019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.984034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:21.984050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.984065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:21.984081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.984097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:21.984113] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.984128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.315 [2024-07-24 18:03:21.984145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.315 [2024-07-24 18:03:21.984160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984457] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90592 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.984970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.984985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.985002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.985017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.985033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.985048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.985065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.985087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.985104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 
[2024-07-24 18:03:21.985119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.985136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.985151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.985167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.316 [2024-07-24 18:03:21.985182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.316 [2024-07-24 18:03:21.985200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.985649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.985682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.985714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.985746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.985779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.985811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.985843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.985876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.985908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.985953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.985970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.985985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.986018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.986051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.986084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.317 [2024-07-24 18:03:21.986116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.986148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.986180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.986212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.986255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.986288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.986323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.317 [2024-07-24 18:03:21.986361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.317 [2024-07-24 18:03:21.986378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 
[2024-07-24 18:03:21.986474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.986979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.986995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.987012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.987044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.987076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.987107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.987139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.318 [2024-07-24 18:03:21.987177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.318 [2024-07-24 18:03:21.987229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91064 len:8 PRP1 0x0 PRP2 0x0 00:16:26.318 [2024-07-24 18:03:21.987254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.318 [2024-07-24 18:03:21.987285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.318 [2024-07-24 18:03:21.987297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90160 len:8 PRP1 0x0 PRP2 0x0 00:16:26.318 [2024-07-24 18:03:21.987313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.318 [2024-07-24 18:03:21.987340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.318 [2024-07-24 18:03:21.987362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90168 len:8 PRP1 0x0 PRP2 0x0 00:16:26.318 [2024-07-24 18:03:21.987379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.318 [2024-07-24 18:03:21.987405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.318 [2024-07-24 18:03:21.987417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90176 len:8 PRP1 0x0 PRP2 0x0 00:16:26.318 [2024-07-24 18:03:21.987431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.318 [2024-07-24 18:03:21.987457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.318 [2024-07-24 18:03:21.987469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90184 len:8 PRP1 0x0 PRP2 0x0 00:16:26.318 [2024-07-24 18:03:21.987484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.318 [2024-07-24 18:03:21.987510] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.318 [2024-07-24 18:03:21.987523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90192 len:8 PRP1 0x0 PRP2 0x0 00:16:26.318 [2024-07-24 18:03:21.987538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.318 [2024-07-24 18:03:21.987563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.318 [2024-07-24 18:03:21.987575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90200 len:8 PRP1 0x0 PRP2 0x0 00:16:26.318 [2024-07-24 18:03:21.987590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.318 [2024-07-24 18:03:21.987616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.318 [2024-07-24 18:03:21.987627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90208 len:8 PRP1 0x0 PRP2 0x0 00:16:26.318 [2024-07-24 18:03:21.987650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.318 [2024-07-24 18:03:21.987665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.318 [2024-07-24 18:03:21.987676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.318 [2024-07-24 18:03:21.987687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90216 len:8 PRP1 0x0 PRP2 0x0 00:16:26.318 [2024-07-24 18:03:21.987703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.987719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.987730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.987741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90224 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.987756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.987771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.987782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.987793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90232 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.987809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.987825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.987836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.987847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90240 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.987862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.987878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.987889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.987901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90248 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.987916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.987932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.987943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.987956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90256 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.987971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.987986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.987997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.988009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90264 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.988024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.988049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.988067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90272 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.988082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.988109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.988120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90280 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.988135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.988161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 
18:03:21.988172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90288 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.988187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.988214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.988226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90296 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.988259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.988287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.988299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90304 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.988314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.988341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.988352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90312 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.988368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.988395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.988409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90320 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.988424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.988461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.988472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90328 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.988487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.988520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.988531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90336 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.988546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.319 [2024-07-24 18:03:21.988572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.319 [2024-07-24 18:03:21.988583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90344 len:8 PRP1 0x0 PRP2 0x0 00:16:26.319 [2024-07-24 18:03:21.988598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988680] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa09a50 was disconnected and freed. reset controller. 00:16:26.319 [2024-07-24 18:03:21.988700] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:26.319 [2024-07-24 18:03:21.988770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.319 [2024-07-24 18:03:21.988790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.319 [2024-07-24 18:03:21.988822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.319 [2024-07-24 18:03:21.988856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.319 [2024-07-24 18:03:21.988886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:21.988902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:26.319 [2024-07-24 18:03:21.988960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x993e30 (9): Bad file descriptor 00:16:26.319 [2024-07-24 18:03:21.992387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:26.319 [2024-07-24 18:03:22.025028] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:26.319 [2024-07-24 18:03:26.615692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.319 [2024-07-24 18:03:26.615745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:26.615772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.319 [2024-07-24 18:03:26.615788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:26.615806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.319 [2024-07-24 18:03:26.615821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:26.615864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.319 [2024-07-24 18:03:26.615880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.319 [2024-07-24 18:03:26.615897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.615912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.615929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.615944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.615961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.615976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.615992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616088] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616431] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45496 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.616969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.616986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.617001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.617019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.617034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.617051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.617066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.617083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.617098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.617115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.617131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.617148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 
[2024-07-24 18:03:26.617163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.617188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.617204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.320 [2024-07-24 18:03:26.617221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.320 [2024-07-24 18:03:26.617237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.321 [2024-07-24 18:03:26.617570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.321 [2024-07-24 18:03:26.617602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.321 [2024-07-24 18:03:26.617644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.321 [2024-07-24 18:03:26.617676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.617969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.617986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.618001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.618018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.618040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.618058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.618074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.618090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.618106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.618122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.618138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.618155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.618170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.618187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.618202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.618219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.618235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.618260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.618275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.618292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.618308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.321 [2024-07-24 18:03:26.618325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.321 [2024-07-24 18:03:26.618340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 
18:03:26.618525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.322 [2024-07-24 18:03:26.618702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.322 [2024-07-24 18:03:26.618735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.322 [2024-07-24 18:03:26.618768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.322 [2024-07-24 18:03:26.618801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.322 [2024-07-24 18:03:26.618833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.322 [2024-07-24 18:03:26.618872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.322 [2024-07-24 18:03:26.618906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.618971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.618989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46056 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.322 [2024-07-24 18:03:26.619652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.322 [2024-07-24 18:03:26.619667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.619684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.323 [2024-07-24 18:03:26.619700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.619717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.323 [2024-07-24 18:03:26.619732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.619755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.323 [2024-07-24 18:03:26.619771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.619788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.323 [2024-07-24 18:03:26.619803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.619823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.323 [2024-07-24 18:03:26.619839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.619856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.323 [2024-07-24 18:03:26.619872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.619914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.323 [2024-07-24 18:03:26.619928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45200 len:8 PRP1 0x0 PRP2 0x0 00:16:26.323 [2024-07-24 18:03:26.619943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.619964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.323 [2024-07-24 18:03:26.619976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.323 [2024-07-24 18:03:26.619990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45208 len:8 PRP1 0x0 PRP2 0x0 00:16:26.323 [2024-07-24 18:03:26.620005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.620020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.323 [2024-07-24 18:03:26.620031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.323 [2024-07-24 18:03:26.620043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45216 len:8 PRP1 0x0 PRP2 0x0 00:16:26.323 [2024-07-24 18:03:26.620059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.620075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.323 [2024-07-24 18:03:26.620086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.323 [2024-07-24 18:03:26.620098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45224 len:8 PRP1 0x0 PRP2 0x0 00:16:26.323 [2024-07-24 18:03:26.620112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.620128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.323 [2024-07-24 18:03:26.620139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.323 [2024-07-24 18:03:26.620150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45232 len:8 PRP1 0x0 PRP2 0x0 00:16:26.323 [2024-07-24 18:03:26.620166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.620182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.323 [2024-07-24 18:03:26.620193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.323 [2024-07-24 18:03:26.620211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45240 len:8 PRP1 0x0 PRP2 0x0 00:16:26.323 [2024-07-24 18:03:26.620227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.620252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.323 [2024-07-24 18:03:26.620264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.323 [2024-07-24 18:03:26.620276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:45248 len:8 PRP1 0x0 PRP2 0x0 00:16:26.323 [2024-07-24 18:03:26.620291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.620359] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa21080 was disconnected and freed. reset controller. 00:16:26.323 [2024-07-24 18:03:26.620380] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:26.323 [2024-07-24 18:03:26.620446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.323 [2024-07-24 18:03:26.620465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.620482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.323 [2024-07-24 18:03:26.620498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.620515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.323 [2024-07-24 18:03:26.620530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.620547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.323 [2024-07-24 18:03:26.620562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.323 [2024-07-24 18:03:26.620579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:26.323 [2024-07-24 18:03:26.620618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x993e30 (9): Bad file descriptor 00:16:26.323 [2024-07-24 18:03:26.624052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:26.323 [2024-07-24 18:03:26.656908] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
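[Editor's note] The block above is the expected signature of a path failover: every queued READ/WRITE on the old submission queue completes with ABORTED - SQ DELETION, bdev_nvme frees the disconnected qpair, picks the next registered trid (here failing over from 10.0.0.2:4422 to 10.0.0.2:4420), and finishes with "Resetting controller successful". A minimal sketch of how a failover like this is typically provoked against this target is shown below; the NQN, address and ports are taken from the log, but the choice of which listener to drop at this point is an assumption, not a transcript of the test script, and rpc.py is shortened from its absolute path in the log.

    # Hypothetical sketch: removing the listener the initiator is currently using
    # forces bdev_nvme to abort in-flight I/O and fail over to another registered path.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # the remaining listener (10.0.0.2:4420) then becomes the active path
    rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1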
00:16:26.323 00:16:26.323 Latency(us) 00:16:26.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.323 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.323 Verification LBA range: start 0x0 length 0x4000 00:16:26.323 NVMe0n1 : 15.00 9137.88 35.69 240.80 0.00 13620.75 530.53 23717.79 00:16:26.323 =================================================================================================================== 00:16:26.323 Total : 9137.88 35.69 240.80 0.00 13620.75 530.53 23717.79 00:16:26.323 Received shutdown signal, test time was about 15.000000 seconds 00:16:26.323 00:16:26.323 Latency(us) 00:16:26.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.323 =================================================================================================================== 00:16:26.323 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=87442 00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 87442 /var/tmp/bdevperf.sock 00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87442 ']' 00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:26.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
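[Editor's note] At this point the first 15-second run has been checked for exactly three "Resetting controller successful" messages (count=3), and a second bdevperf instance has been launched in RPC-server mode (-z) on /var/tmp/bdevperf.sock with the same workload parameters (-q 128 -o 4096 -w verify) but a 1-second duration. A minimal sketch of how such an idle instance is driven, using only commands that appear verbatim further down in this log (relative paths shortened from the /home/vagrant/spdk_repo/spdk prefix):

    # attach a path to the idle bdevperf process over its RPC socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # then kick off the configured verify workload
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests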
00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:26.323 18:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:26.595 18:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.595 18:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:16:26.595 18:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:26.852 [2024-07-24 18:03:33.747825] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:26.852 18:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:27.111 [2024-07-24 18:03:33.976035] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:27.111 18:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:27.369 NVMe0n1 00:16:27.369 18:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:27.627 00:16:27.886 18:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:28.144 00:16:28.144 18:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:28.145 18:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:28.145 18:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:28.403 18:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:31.704 18:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:31.704 18:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:31.704 18:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:31.704 18:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=87580 00:16:31.704 18:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 87580 00:16:33.154 0 00:16:33.154 18:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:33.154 [2024-07-24 18:03:32.556288] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:16:33.155 [2024-07-24 18:03:32.556428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87442 ] 00:16:33.155 [2024-07-24 18:03:32.698402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.155 [2024-07-24 18:03:32.817102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.155 [2024-07-24 18:03:35.289267] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:33.155 [2024-07-24 18:03:35.289382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.155 [2024-07-24 18:03:35.289403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.155 [2024-07-24 18:03:35.289422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.155 [2024-07-24 18:03:35.289436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.155 [2024-07-24 18:03:35.289451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.155 [2024-07-24 18:03:35.289466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.155 [2024-07-24 18:03:35.289481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.155 [2024-07-24 18:03:35.289496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.155 [2024-07-24 18:03:35.289511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:33.155 [2024-07-24 18:03:35.289547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:33.155 [2024-07-24 18:03:35.289571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400e30 (9): Bad file descriptor 00:16:33.155 [2024-07-24 18:03:35.294052] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:33.155 Running I/O for 1 seconds... 
00:16:33.155 00:16:33.155 Latency(us) 00:16:33.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.155 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:33.155 Verification LBA range: start 0x0 length 0x4000 00:16:33.155 NVMe0n1 : 1.00 10605.10 41.43 0.00 0.00 12018.35 1763.23 13481.69 00:16:33.155 =================================================================================================================== 00:16:33.155 Total : 10605.10 41.43 0.00 0.00 12018.35 1763.23 13481.69 00:16:33.155 18:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:33.155 18:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:33.155 18:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:33.413 18:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:33.413 18:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:33.671 18:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:33.930 18:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:37.214 18:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:37.214 18:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:37.214 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 87442 00:16:37.214 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87442 ']' 00:16:37.214 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87442 00:16:37.214 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:37.214 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:37.214 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87442 00:16:37.214 killing process with pid 87442 00:16:37.214 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:37.214 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:37.214 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87442' 00:16:37.214 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87442 00:16:37.214 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87442 00:16:37.472 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:37.472 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.730 18:03:44 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:37.730 rmmod nvme_tcp 00:16:37.730 rmmod nvme_fabrics 00:16:37.730 rmmod nvme_keyring 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87068 ']' 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87068 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87068 ']' 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87068 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87068 00:16:37.730 killing process with pid 87068 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87068' 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87068 00:16:37.730 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87068 00:16:37.988 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:37.988 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:37.988 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:37.988 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.988 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:37.988 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.988 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.988 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.246 
18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:38.246 00:16:38.246 real 0m33.306s 00:16:38.246 user 2m8.468s 00:16:38.246 sys 0m5.700s 00:16:38.246 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.246 18:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:38.246 ************************************ 00:16:38.246 END TEST nvmf_failover 00:16:38.246 ************************************ 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.246 ************************************ 00:16:38.246 START TEST nvmf_host_discovery 00:16:38.246 ************************************ 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:38.246 * Looking for test storage... 00:16:38.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:38.246 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:38.247 Cannot find device "nvmf_tgt_br" 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.247 Cannot find device "nvmf_tgt_br2" 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:38.247 Cannot find device "nvmf_tgt_br" 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:38.247 Cannot find device "nvmf_tgt_br2" 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:38.247 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:38.505 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:38.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:16:38.763 00:16:38.763 --- 10.0.0.2 ping statistics --- 00:16:38.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.763 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:38.763 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:38.763 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:16:38.763 00:16:38.763 --- 10.0.0.3 ping statistics --- 00:16:38.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.763 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:38.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:38.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:38.763 00:16:38.763 --- 10.0.0.1 ping statistics --- 00:16:38.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.763 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=87885 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 87885 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 87885 ']' 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
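nvmfappstart then launches the target application inside that namespace: NVMF_APP was prefixed with the NVMF_TARGET_NS_CMD wrapper above, so the nvmf_tgt process only sees the 10.0.0.2/10.0.0.3 interfaces. In effect (binary path, core mask and tracepoint mask as reported by the trace; a simplified sketch of what nvmfappstart/waitforlisten do, not their literal bodies):

    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                 # 87885 in this run
    waitforlisten "$nvmfpid"   # polls the default /var/tmp/spdk.sock RPC socket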
00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.763 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.763 [2024-07-24 18:03:45.612433] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:16:38.763 [2024-07-24 18:03:45.612508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.021 [2024-07-24 18:03:45.749506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.021 [2024-07-24 18:03:45.853381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.021 [2024-07-24 18:03:45.853430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.021 [2024-07-24 18:03:45.853441] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.021 [2024-07-24 18:03:45.853451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.021 [2024-07-24 18:03:45.853459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.021 [2024-07-24 18:03:45.853489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.647 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.647 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:39.647 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:39.647 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:39.647 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.905 [2024-07-24 18:03:46.656773] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.905 [2024-07-24 18:03:46.664902] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 
-- # rpc_cmd bdev_null_create null0 1000 512 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.905 null0 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.905 null1 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=87935 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 87935 /tmp/host.sock 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:39.905 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 87935 ']' 00:16:39.906 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:16:39.906 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:39.906 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:39.906 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:39.906 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:39.906 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.906 [2024-07-24 18:03:46.742961] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
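With the target accepting RPCs, the test prepares the discovery scenario and then starts a second SPDK application on the host side that acts purely as the NVMe-oF initiator; the bdev_nvme_start_discovery call that drives the rest of the test is issued just below. Condensed from the traced commands:

    # target side: plain rpc_cmd talks to the default /var/tmp/spdk.sock socket
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
            -t tcp -a 10.0.0.2 -s 8009              # discovery service on port 8009
    rpc_cmd bdev_null_create null0 1000 512         # two null bdevs to export later
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine

    # host side: a second nvmf_tgt with its own RPC socket; rpc_cmd -s /tmp/host.sock
    # addresses this process for the bdev_nvme discovery RPCs
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!                                      # 87935 in this run
    waitforlisten "$hostpid" /tmp/host.sock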
00:16:39.906 [2024-07-24 18:03:46.743048] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87935 ] 00:16:40.164 [2024-07-24 18:03:46.882896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.164 [2024-07-24 18:03:47.000526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.097 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.097 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.097 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.097 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:41.097 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:41.098 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.098 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.098 [2024-07-24 18:03:48.057294] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.098 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.098 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:41.098 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.098 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.098 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.098 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.098 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.098 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:41.366 18:03:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:16:41.366 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:16:41.950 [2024-07-24 18:03:48.729684] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:41.950 [2024-07-24 18:03:48.729715] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:41.950 [2024-07-24 18:03:48.729729] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:41.950 [2024-07-24 18:03:48.815923] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:41.950 [2024-07-24 18:03:48.873236] 
bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:41.950 [2024-07-24 18:03:48.873287] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.516 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:42.517 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.777 [2024-07-24 18:03:49.586077] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:42.777 [2024-07-24 18:03:49.586752] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:42.777 [2024-07-24 18:03:49.586784] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:42.777 [2024-07-24 18:03:49.672800] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.777 [2024-07-24 18:03:49.733136] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:42.777 [2024-07-24 18:03:49.733170] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:42.777 [2024-07-24 18:03:49.733178] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:16:42.777 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.153 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.153 [2024-07-24 18:03:50.847172] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:44.153 [2024-07-24 18:03:50.847211] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:44.153 [2024-07-24 18:03:50.850546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.153 [2024-07-24 18:03:50.850579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.154 [2024-07-24 18:03:50.850593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.154 [2024-07-24 18:03:50.850603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.154 [2024-07-24 18:03:50.850614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.154 [2024-07-24 18:03:50.850624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.154 [2024-07-24 18:03:50.850635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:16:44.154 [2024-07-24 18:03:50.850645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.154 [2024-07-24 18:03:50.850655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf03c50 is same with the state(5) to be set 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:44.154 [2024-07-24 18:03:50.860506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf03c50 (9): Bad file descriptor 00:16:44.154 [2024-07-24 18:03:50.870547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:44.154 [2024-07-24 18:03:50.870739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:44.154 [2024-07-24 18:03:50.870776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf03c50 with addr=10.0.0.2, port=4420 00:16:44.154 [2024-07-24 18:03:50.870797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf03c50 is same with the state(5) to be set 00:16:44.154 [2024-07-24 18:03:50.870828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf03c50 (9): Bad file descriptor 00:16:44.154 [2024-07-24 18:03:50.870852] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:44.154 [2024-07-24 18:03:50.870869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:44.154 [2024-07-24 18:03:50.870888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:44.154 [2024-07-24 18:03:50.870914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
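The ERROR lines in this stretch (connect() errno = 111, i.e. ECONNREFUSED, followed by "Resetting controller failed") are a direct consequence of the nvmf_subsystem_remove_listener call just above: the host application still holds an nvme0 path to 10.0.0.2:4420 and keeps trying to reconnect to a port that no longer has a listener. Which paths the host still sees can be checked with the same query the get_subsystem_paths helper uses, for example:

    # trsvcids of the paths the host app currently has for controller nvme0
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs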
00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.154 [2024-07-24 18:03:50.880642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:44.154 [2024-07-24 18:03:50.880772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:44.154 [2024-07-24 18:03:50.880799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf03c50 with addr=10.0.0.2, port=4420 00:16:44.154 [2024-07-24 18:03:50.880812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf03c50 is same with the state(5) to be set 00:16:44.154 [2024-07-24 18:03:50.880830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf03c50 (9): Bad file descriptor 00:16:44.154 [2024-07-24 18:03:50.880846] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:44.154 [2024-07-24 18:03:50.880857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:44.154 [2024-07-24 18:03:50.880868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:44.154 [2024-07-24 18:03:50.880883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:44.154 [2024-07-24 18:03:50.890722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:44.154 [2024-07-24 18:03:50.890904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:44.154 [2024-07-24 18:03:50.890935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf03c50 with addr=10.0.0.2, port=4420 00:16:44.154 [2024-07-24 18:03:50.890955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf03c50 is same with the state(5) to be set 00:16:44.154 [2024-07-24 18:03:50.890983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf03c50 (9): Bad file descriptor 00:16:44.154 [2024-07-24 18:03:50.891007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:44.154 [2024-07-24 18:03:50.891022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:44.154 [2024-07-24 18:03:50.891040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:44.154 [2024-07-24 18:03:50.891062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:44.154 [2024-07-24 18:03:50.900847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:44.154 [2024-07-24 18:03:50.901004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:44.154 [2024-07-24 18:03:50.901034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf03c50 with addr=10.0.0.2, port=4420 00:16:44.154 [2024-07-24 18:03:50.901052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf03c50 is same with the state(5) to be set 00:16:44.154 [2024-07-24 18:03:50.901077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf03c50 (9): Bad file descriptor 00:16:44.154 [2024-07-24 18:03:50.901098] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:44.154 [2024-07-24 18:03:50.901113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:44.154 [2024-07-24 18:03:50.901129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:44.154 [2024-07-24 18:03:50.901149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.154 [2024-07-24 18:03:50.910945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:44.154 [2024-07-24 18:03:50.911117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:44.154 [2024-07-24 18:03:50.911150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf03c50 with addr=10.0.0.2, port=4420 00:16:44.154 [2024-07-24 18:03:50.911169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf03c50 is same with the state(5) to be set 00:16:44.154 [2024-07-24 18:03:50.911196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf03c50 (9): Bad file 
descriptor 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.154 [2024-07-24 18:03:50.911233] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:44.154 [2024-07-24 18:03:50.911265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:44.154 [2024-07-24 18:03:50.911284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:44.154 [2024-07-24 18:03:50.911307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:44.154 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:44.154 [2024-07-24 18:03:50.921037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:44.154 [2024-07-24 18:03:50.921210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:44.154 [2024-07-24 18:03:50.921259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf03c50 with addr=10.0.0.2, port=4420 00:16:44.154 [2024-07-24 18:03:50.921283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf03c50 is same with the state(5) to be set 00:16:44.154 [2024-07-24 18:03:50.921314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf03c50 (9): Bad file descriptor 00:16:44.154 [2024-07-24 18:03:50.921339] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:44.154 [2024-07-24 18:03:50.921355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:44.154 [2024-07-24 18:03:50.921374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:44.154 [2024-07-24 18:03:50.921398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:44.154 [2024-07-24 18:03:50.931125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:44.154 [2024-07-24 18:03:50.931277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:44.154 [2024-07-24 18:03:50.931300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf03c50 with addr=10.0.0.2, port=4420 00:16:44.154 [2024-07-24 18:03:50.931314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf03c50 is same with the state(5) to be set 00:16:44.155 [2024-07-24 18:03:50.931332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf03c50 (9): Bad file descriptor 00:16:44.155 [2024-07-24 18:03:50.931348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:44.155 [2024-07-24 18:03:50.931358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:44.155 [2024-07-24 18:03:50.931398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:44.155 [2024-07-24 18:03:50.931415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
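
The burst of "connect() failed, errno = 111" (ECONNREFUSED) entries above is expected at this point in the run: host/discovery.sh@127 has just removed the 10.0.0.2:4420 listener, so every reconnect attempt against that port is refused until the discovery service steers the controller over to the 4421 listener, which the following checks wait for. A condensed sketch of that step, using only commands that appear in the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py, and waitforcondition is the retry helper sketched earlier):

  # Drop the first listener; the host-side discovery service should move nvme0 to port 4421.
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Poll until the only remaining path for nvme0 is the second port.
  waitforcondition '[[ "$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r ".[].ctrlrs[].trid.trsvcid" | sort -n | xargs)" == "4421" ]]'
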
00:16:44.155 [2024-07-24 18:03:50.933490] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:44.155 [2024-07-24 18:03:50.933518] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:44.155 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:44.155 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:44.413 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.414 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.348 [2024-07-24 18:03:52.260129] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:45.348 [2024-07-24 18:03:52.260167] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:45.348 [2024-07-24 18:03:52.260193] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:45.606 [2024-07-24 18:03:52.347264] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:45.606 [2024-07-24 18:03:52.407701] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:45.606 [2024-07-24 18:03:52.407769] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.606 2024/07/24 18:03:52 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:16:45.606 request: 00:16:45.606 { 00:16:45.606 "method": "bdev_nvme_start_discovery", 00:16:45.606 "params": { 00:16:45.606 "name": "nvme", 00:16:45.606 "trtype": "tcp", 00:16:45.606 "traddr": "10.0.0.2", 00:16:45.606 "adrfam": "ipv4", 00:16:45.606 "trsvcid": "8009", 00:16:45.606 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:45.606 "wait_for_attach": true 00:16:45.606 } 00:16:45.606 } 00:16:45.606 Got JSON-RPC error response 00:16:45.606 GoRPCClient: error on JSON-RPC call 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.606 2024/07/24 18:03:52 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:16:45.606 request: 00:16:45.606 { 00:16:45.606 "method": "bdev_nvme_start_discovery", 00:16:45.606 "params": { 00:16:45.606 "name": "nvme_second", 00:16:45.606 "trtype": "tcp", 00:16:45.606 "traddr": "10.0.0.2", 00:16:45.606 "adrfam": "ipv4", 00:16:45.606 "trsvcid": "8009", 00:16:45.606 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:45.606 "wait_for_attach": true 00:16:45.606 } 00:16:45.606 } 00:16:45.606 Got JSON-RPC error response 00:16:45.606 GoRPCClient: error on JSON-RPC call 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:45.606 18:03:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:45.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.864 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.797 [2024-07-24 18:03:53.652153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:46.797 [2024-07-24 18:03:53.652225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefd7b0 with addr=10.0.0.2, port=8010 00:16:46.797 [2024-07-24 18:03:53.652271] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:46.797 [2024-07-24 18:03:53.652290] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:46.797 [2024-07-24 18:03:53.652305] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:47.731 [2024-07-24 18:03:54.652147] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:47.731 [2024-07-24 18:03:54.652204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefd7b0 with addr=10.0.0.2, port=8010 00:16:47.731 [2024-07-24 18:03:54.652227] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:47.731 [2024-07-24 18:03:54.652238] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:47.731 [2024-07-24 18:03:54.652260] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:49.107 [2024-07-24 18:03:55.652013] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:49.107 2024/07/24 18:03:55 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:16:49.107 request: 00:16:49.107 { 00:16:49.107 "method": "bdev_nvme_start_discovery", 00:16:49.107 "params": { 00:16:49.107 "name": "nvme_second", 00:16:49.107 "trtype": "tcp", 00:16:49.107 "traddr": "10.0.0.2", 00:16:49.107 "adrfam": "ipv4", 00:16:49.107 "trsvcid": "8010", 00:16:49.107 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:49.107 "wait_for_attach": false, 00:16:49.107 "attach_timeout_ms": 3000 00:16:49.107 } 00:16:49.107 } 00:16:49.107 Got JSON-RPC error response 00:16:49.107 GoRPCClient: error on JSON-RPC call 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 87935 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:49.107 18:03:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.107 rmmod nvme_tcp 00:16:49.107 rmmod nvme_fabrics 00:16:49.107 rmmod nvme_keyring 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 87885 ']' 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 87885 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 87885 ']' 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 87885 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87885 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:49.107 killing process with pid 87885 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87885' 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 87885 00:16:49.107 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 87885 00:16:49.107 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:49.107 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:49.107 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:49.107 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.107 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:49.107 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.107 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.107 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.107 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:49.107 ************************************ 00:16:49.107 END TEST 
nvmf_host_discovery 00:16:49.107 ************************************ 00:16:49.107 00:16:49.107 real 0m11.037s 00:16:49.107 user 0m21.202s 00:16:49.107 sys 0m2.044s 00:16:49.107 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.107 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.366 ************************************ 00:16:49.366 START TEST nvmf_host_multipath_status 00:16:49.366 ************************************ 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:49.366 * Looking for test storage... 00:16:49.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.366 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 
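
One note on the JSON-RPC failures recorded in the nvmf_host_discovery section above, before the multipath_status setup continues: "Code=-17 Msg=File exists" is what bdev_nvme_start_discovery returns when a discovery service matching an existing one is started again (both the repeated -b nvme call and the -b nvme_second call against the same 10.0.0.2:8009 hit it), while "Code=-110 Msg=Connection timed out" is returned when the -T/attach_timeout_ms budget expires against port 8010, where nothing is listening. Roughly equivalent direct rpc.py invocations, using only flags that appear in the traced rpc_cmd calls (the rpc.py path and the /tmp/host.sock socket are as shown elsewhere in this log):

  # Succeeds the first time, then fails with -17 "File exists" when repeated:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # Fails with -110 "Connection timed out" after 3 s, since nothing listens on 8010:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
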
00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 
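
The nvmf/common.sh traces just above show how each test derives its NVMe host identity and which ports the target will expose: a fresh host NQN is generated with nvme gen-hostnqn, its UUID is reused as the host ID, both are packed into the NVME_HOST argument array, and because NET_TYPE=virt nvmftestinit falls through to nvmf_veth_init, whose commands follow below. A condensed sketch of those assignments as traced (the parameter expansion used to derive the host ID is an assumption; the log only shows the resulting value):

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)             # e.g. nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}          # assumption: the log shows only the stripped UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT='nvme connect'
  NET_TYPE=virt                                # drives the [[ virt == ... ]] checks above toward nvmf_veth_init
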
00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:49.367 Cannot find device "nvmf_tgt_br" 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:49.367 Cannot find device "nvmf_tgt_br2" 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:49.367 Cannot find device "nvmf_tgt_br" 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:49.367 Cannot find device "nvmf_tgt_br2" 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:49.367 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:49.625 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:49.625 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:49.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.625 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:49.625 18:03:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:49.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:49.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:16:49.626 00:16:49.626 --- 10.0.0.2 ping statistics --- 00:16:49.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.626 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:49.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:49.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:49.626 00:16:49.626 --- 10.0.0.3 ping statistics --- 00:16:49.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.626 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:49.626 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:49.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:49.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:49.884 00:16:49.884 --- 10.0.0.1 ping statistics --- 00:16:49.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.884 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:49.884 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.884 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:49.884 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:49.884 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=88408 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 88408 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 88408 ']' 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.885 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:49.885 [2024-07-24 18:03:56.692971] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:16:49.885 [2024-07-24 18:03:56.693068] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.885 [2024-07-24 18:03:56.836555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:50.143 [2024-07-24 18:03:56.975805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.143 [2024-07-24 18:03:56.975868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.143 [2024-07-24 18:03:56.975884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.143 [2024-07-24 18:03:56.975896] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.143 [2024-07-24 18:03:56.975907] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
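For reference, the nvmf/common.sh setup traced above boils down to the following shell sketch. It is a condensed reconstruction from the trace (interface names, addresses, and flags are taken verbatim; the repetitive `ip link set ... up` steps are abbreviated into loops), not the script itself:

    # Condensed from the nvmf/common.sh trace above; not the script verbatim.
    # Target network namespace with two veth pairs bridged back to the initiator side.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # The initiator keeps 10.0.0.1; the namespaced target end owns 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

    # Bridge the host-side peers together and open TCP/4420 toward the initiator interface.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Connectivity checks, then the target is started inside the namespace on two cores (-m 0x3).
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &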
00:16:50.143 [2024-07-24 18:03:56.976132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.143 [2024-07-24 18:03:56.976139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.710 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.710 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:50.710 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:50.710 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:50.710 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:50.710 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.710 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=88408 00:16:50.710 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:51.277 [2024-07-24 18:03:57.947419] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.277 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:51.277 Malloc0 00:16:51.277 18:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:51.535 18:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:51.794 18:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.052 [2024-07-24 18:03:59.018693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.310 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:52.569 [2024-07-24 18:03:59.302872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:52.569 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:52.569 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=88516 00:16:52.569 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:52.569 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 88516 /var/tmp/bdevperf.sock 00:16:52.569 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 88516 ']' 00:16:52.569 18:03:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.570 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.570 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.570 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.570 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:53.505 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.505 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:53.505 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:53.763 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:54.020 Nvme0n1 00:16:54.020 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:54.586 Nvme0n1 00:16:54.586 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:54.586 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:56.487 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:56.487 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:56.746 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:57.004 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:57.938 18:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:57.938 18:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:57.938 18:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.938 18:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:58.196 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.196 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:58.196 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.196 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:58.454 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:58.454 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:58.454 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.454 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:58.712 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.712 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:58.712 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.712 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:58.972 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.972 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:58.972 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.972 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:59.231 18:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.231 18:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:59.231 18:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.231 18:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:59.489 18:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.489 18:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state 
non_optimized optimized 00:16:59.489 18:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:59.748 18:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:00.017 18:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:00.981 18:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:00.981 18:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:00.981 18:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.981 18:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:01.549 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:01.549 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:01.549 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.549 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:01.549 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.549 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:01.549 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:01.549 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.808 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.808 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:01.808 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.808 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:02.094 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.094 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:02.094 18:04:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.094 18:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:02.356 18:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.356 18:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:02.356 18:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.356 18:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:02.615 18:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.615 18:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:02.615 18:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:02.874 18:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:03.132 18:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:04.066 18:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:04.066 18:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:04.066 18:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:04.066 18:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.324 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:04.324 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:04.324 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:04.324 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.583 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:04.583 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:04.583 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:04.583 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.842 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:04.842 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:05.100 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.100 18:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:05.100 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.100 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:05.100 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.100 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:05.668 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.668 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:05.668 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.668 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:05.668 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.668 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:05.668 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:06.235 18:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:06.493 18:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:07.428 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:07.428 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:07.428 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:07.428 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:07.689 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:07.689 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:07.689 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:07.689 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:07.996 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:07.996 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:07.996 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:07.996 18:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:08.254 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.254 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:08.254 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.254 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:08.822 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.822 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:08.822 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.822 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:08.822 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.822 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:08.822 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:08.822 18:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.081 18:04:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:09.081 18:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:09.081 18:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:09.339 18:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:09.932 18:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:10.867 18:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:10.867 18:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:10.867 18:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:10.867 18:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:11.125 18:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:11.125 18:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:11.125 18:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.125 18:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:11.383 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:11.383 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:11.383 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.383 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:11.641 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:11.641 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:11.641 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:11.641 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.934 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:11.934 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:11.934 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.934 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:12.194 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:12.194 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:12.194 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.194 18:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:12.453 18:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:12.453 18:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:12.453 18:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:12.712 18:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:12.970 18:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:13.902 18:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:13.902 18:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:13.902 18:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.902 18:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:14.160 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:14.160 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:14.160 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.160 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:14.418 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:17:14.418 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:14.418 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.418 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:14.676 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:14.676 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:14.676 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:14.676 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.934 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:14.934 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:14.934 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.934 18:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:15.192 18:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:15.192 18:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:15.192 18:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.192 18:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:15.450 18:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:15.450 18:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:15.708 18:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:15.708 18:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:15.966 18:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:16.547 18:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@120 -- # sleep 1 00:17:17.483 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:17.483 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:17.483 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:17.483 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.741 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.741 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:17.741 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.741 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:17.998 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.998 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:17.999 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.999 18:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:18.596 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:18.596 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:18.596 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:18.596 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:18.596 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:18.596 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:18.855 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:18.855 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:19.113 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:19.113 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
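The check_status calls above all reduce to one pattern: dump bdevperf's view of the I/O paths with bdev_nvme_get_io_paths and pick a single field per listener out of the JSON with jq. A minimal sketch of that helper follows; the jq filter is copied from the trace, but the function is parameterized here for illustration and may differ from the real host/multipath_status.sh in detail:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Sketch of the port_status helper seen above (illustrative parameterization).
    # port_status <trsvcid> <field> <expected>: compare one field (current/connected/accessible)
    # of the io_path whose listener port matches <trsvcid> against the expected value.
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # With the active_active policy and both listeners ANA-optimized, both paths report current:
    port_status 4420 current true && port_status 4421 current true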
00:17:19.113 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.113 18:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:19.372 18:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:19.372 18:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:19.372 18:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:19.372 18:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:19.629 18:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:21.003 18:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:21.003 18:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:21.003 18:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.003 18:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:21.003 18:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:21.003 18:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:21.003 18:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.003 18:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:21.261 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:21.261 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:21.261 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.261 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:21.519 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:21.519 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:21.519 18:04:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.519 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:21.778 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:21.778 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:21.778 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:21.778 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:22.036 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:22.036 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:22.036 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:22.036 18:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:22.295 18:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:22.295 18:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:22.295 18:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:22.554 18:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:22.812 18:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:23.748 18:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:23.748 18:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:23.748 18:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.748 18:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:24.005 18:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.005 18:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:24.005 18:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.005 18:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:24.263 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.263 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:24.263 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:24.263 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.826 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.826 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:24.826 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.826 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:24.826 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.826 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:24.826 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:24.826 18:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.392 18:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:25.392 18:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:25.392 18:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.392 18:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:25.392 18:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:25.392 18:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:25.392 18:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:25.958 18:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:25.958 18:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:27.330 18:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:27.330 18:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:27.330 18:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.330 18:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:27.330 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.330 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:27.330 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.330 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:27.587 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:27.587 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:27.587 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.587 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:27.845 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.845 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:27.845 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:27.845 18:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.104 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:28.104 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:28.104 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.104 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:28.361 18:04:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:28.361 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:28.361 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:28.361 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 88516 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 88516 ']' 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 88516 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88516 00:17:28.619 killing process with pid 88516 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88516' 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 88516 00:17:28.619 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 88516 00:17:28.880 Connection closed with partial response: 00:17:28.880 00:17:28.880 00:17:28.880 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 88516 00:17:28.880 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:28.880 [2024-07-24 18:03:59.377552] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:17:28.880 [2024-07-24 18:03:59.377675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88516 ] 00:17:28.880 [2024-07-24 18:03:59.522997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.880 [2024-07-24 18:03:59.641554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.880 Running I/O for 90 seconds... 
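The banner just above ("Total cores available: 1", "Reactor started on core 2", "Running I/O for 90 seconds...") is the startup output of the bdevperf instance this test drives over /var/tmp/bdevperf.sock, captured to the try.txt file that is cat'd here. A hedged sketch of the invocation that output implies, reconstructed only from the EAL parameters and the job line reported later in this log (the binary path and exact flag spelling are assumptions; the authoritative command is in host/multipath_status.sh):

  # Sketch only: core mask 0x4, queue depth 128, 4096-byte verify workload, 90 s run,
  # RPC socket /var/tmp/bdevperf.sock, output captured to try.txt as in the trace above.
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf   # assumed path
  "$bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 90 \
      &> /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt &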
00:17:28.880 [2024-07-24 18:04:16.278498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.278578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.278632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.278650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.278672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.278687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.278708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.278723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.278745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.278760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.278781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.278796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.278817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.278832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.278853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.278868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.278889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.278903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.278925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.278940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.278961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.278998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.279019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.279035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.279056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.279071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.279092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.880 [2024-07-24 18:04:16.279107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.880 [2024-07-24 18:04:16.279129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279335] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128176 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.279969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.279993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f 
p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.881 [2024-07-24 18:04:16.280851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.881 [2024-07-24 18:04:16.280867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.280891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.280907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.280930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.280946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.280970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.280986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.281016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.281032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.281056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.281072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.281096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.281112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.281137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.281159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.281183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.281199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.281223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.281239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.281272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.281288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.281312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.281327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.281351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.281367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.282529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.282550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.282576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.282592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.282618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.282634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.282661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 
18:04:16.282676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.282703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.282718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.282745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.282761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.282787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.282811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.282842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.282858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.282885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.282900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.282926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.282942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.282969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.282985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.283027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.283069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128432 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.283110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.283164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.283204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.283245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.283297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.283344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.283384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.283455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.882 [2024-07-24 18:04:16.283733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.283781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.283830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.283877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.882 [2024-07-24 18:04:16.283924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.882 [2024-07-24 18:04:16.283955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:16.283970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:16.284001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:16.284017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:16.284048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:16.284063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:16.284094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:16.284110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:16.284141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:16.284157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.846903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.846973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.847026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.847043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:17:28.883 [2024-07-24 18:04:32.847065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.847080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.847102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.847117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.847138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.847152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.847173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.883 [2024-07-24 18:04:32.847188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.847208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.883 [2024-07-24 18:04:32.847223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.847255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.883 [2024-07-24 18:04:32.847271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.847292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.883 [2024-07-24 18:04:32.847306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.847990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.883 [2024-07-24 18:04:32.848404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.883 [2024-07-24 18:04:32.848440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.883 [2024-07-24 18:04:32.848476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.883 [2024-07-24 18:04:32.848511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:28.883 [2024-07-24 18:04:32.848843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.883 [2024-07-24 18:04:32.848900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.883 [2024-07-24 18:04:32.848914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.848935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.848950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.848972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.884 [2024-07-24 18:04:32.848987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.884 [2024-07-24 18:04:32.851221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.884 [2024-07-24 18:04:32.851681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.884 [2024-07-24 18:04:32.851724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.884 [2024-07-24 18:04:32.851746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.884 [2024-07-24 18:04:32.851761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.884 Received shutdown signal, test time was about 34.173322 
seconds 00:17:28.884 00:17:28.884 Latency(us) 00:17:28.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.884 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:28.884 Verification LBA range: start 0x0 length 0x4000 00:17:28.884 Nvme0n1 : 34.17 9099.59 35.55 0.00 0.00 14035.51 139.46 4026531.84 00:17:28.884 =================================================================================================================== 00:17:28.884 Total : 9099.59 35.55 0.00 0.00 14035.51 139.46 4026531.84 00:17:28.884 18:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.142 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:29.142 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:29.142 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:29.142 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:29.142 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:17:29.142 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:29.142 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:17:29.142 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:29.143 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:29.143 rmmod nvme_tcp 00:17:29.143 rmmod nvme_fabrics 00:17:29.143 rmmod nvme_keyring 00:17:29.143 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:29.143 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:17:29.143 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:17:29.143 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 88408 ']' 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 88408 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 88408 ']' 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 88408 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88408 00:17:29.400 killing process with pid 88408 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88408' 00:17:29.400 
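The verification pattern repeated throughout the trace above comes down to two helpers: one flips the ANA state of the two listeners with nvmf_subsystem_listener_set_ana_state, the other polls bdevperf's view of each path with bdev_nvme_get_io_paths and a jq filter on the listener's trsvcid, then compares one field against the expected value. A minimal sketch reconstructed from the rpc.py and jq calls shown above (internal structure is an assumption; the real helpers live in host/multipath_status.sh):

  #!/usr/bin/env bash
  # Sketch only: mirrors the rpc.py + jq pattern visible in the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf_sock=/var/tmp/bdevperf.sock

  # set_ANA_state <state for port 4420> <state for port 4421>
  set_ANA_state() {
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # port_status <trsvcid> <field> <expected>   (field: current | connected | accessible)
  port_status() {
      local port=$1 field=$2 expected=$3 actual
      actual=$("$rpc" -s "$bdevperf_sock" bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }

  # Example matching the last check_status above: with 4420 non_optimized and 4421
  # inaccessible, 4420 must be the current/connected/accessible path and 4421 must not
  # be accessible.
  set_ANA_state non_optimized inaccessible
  sleep 1
  port_status 4420 current true && port_status 4421 accessible false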
18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 88408 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 88408 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.400 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:29.659 00:17:29.659 real 0m40.291s 00:17:29.659 user 2m9.122s 00:17:29.659 sys 0m12.100s 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:29.659 ************************************ 00:17:29.659 END TEST nvmf_host_multipath_status 00:17:29.659 ************************************ 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.659 ************************************ 00:17:29.659 START TEST nvmf_discovery_remove_ifc 00:17:29.659 ************************************ 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:29.659 * Looking for test storage... 
00:17:29.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.659 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
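The nvmf_veth_init sequence traced below is what gives the TCP tests their network: a target network namespace joined to the host-side initiator interface through a bridge. Condensed into a standalone sketch (interface names, addresses and the firewall rule are taken from the trace; root and iproute2 are assumed, and the second target interface nvmf_tgt_if2 at 10.0.0.3 follows the same pattern), it amounts to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip link add nvmf_br type bridge                                # bridge joining the two veth peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                             # initiator-to-target sanity check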
00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:29.660 Cannot find device "nvmf_tgt_br" 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:29.660 Cannot find device "nvmf_tgt_br2" 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:29.660 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:29.917 Cannot find device "nvmf_tgt_br" 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:29.917 Cannot find device "nvmf_tgt_br2" 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:29.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:29.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:29.917 18:04:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:29.917 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:30.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:17:30.174 00:17:30.174 --- 10.0.0.2 ping statistics --- 00:17:30.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.174 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:30.174 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:30.174 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:30.174 00:17:30.174 --- 10.0.0.3 ping statistics --- 00:17:30.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.174 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:30.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:30.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:30.174 00:17:30.174 --- 10.0.0.1 ping statistics --- 00:17:30.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.174 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=89837 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 89837 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 89837 ']' 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.174 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.175 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.175 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.175 18:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:30.175 [2024-07-24 18:04:37.002019] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:17:30.175 [2024-07-24 18:04:37.002110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.175 [2024-07-24 18:04:37.140590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.473 [2024-07-24 18:04:37.246994] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.473 [2024-07-24 18:04:37.247039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.473 [2024-07-24 18:04:37.247049] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.473 [2024-07-24 18:04:37.247058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.473 [2024-07-24 18:04:37.247065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.473 [2024-07-24 18:04:37.247098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.040 18:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.040 18:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:17:31.040 18:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.040 18:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:31.041 18:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:31.041 18:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.041 18:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:31.041 18:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.041 18:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:31.041 [2024-07-24 18:04:37.968235] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.041 [2024-07-24 18:04:37.976396] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:31.041 null0 00:17:31.041 [2024-07-24 18:04:38.008349] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.298 18:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.298 18:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=89887 00:17:31.298 18:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:31.298 18:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 89887 /tmp/host.sock 00:17:31.298 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
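The target reaches the state logged above (a discovery listener on 10.0.0.2:8009, an I/O listener on 10.0.0.2:4420 and a null0 bdev) through a bare rpc_cmd whose payload is not echoed in the trace, so the exact configuration calls are not visible here. A plausible equivalent built from standard rpc.py methods would look like the sketch below; the RPC names are real, but the specific arguments (serial number, null bdev size, allow-any-host) are assumptions for illustration, not quotes from the test:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target-side RPCs go to the default /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp
  $rpc bdev_null_create null0 1000 512                                              # size and block size assumed
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001    # serial and -a assumed
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009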
00:17:31.298 18:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 89887 ']' 00:17:31.298 18:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:17:31.298 18:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.298 18:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:31.298 18:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.298 18:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:31.298 [2024-07-24 18:04:38.097891] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:17:31.298 [2024-07-24 18:04:38.098005] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89887 ] 00:17:31.298 [2024-07-24 18:04:38.239831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.555 [2024-07-24 18:04:38.357804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.120 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.120 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:17:32.120 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.120 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:32.120 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.120 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:32.120 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.120 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:32.120 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.120 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:32.418 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.418 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:32.418 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.418 18:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:33.351 [2024-07-24 18:04:40.161535] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: 
Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:33.351 [2024-07-24 18:04:40.161575] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:33.351 [2024-07-24 18:04:40.161591] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:33.351 [2024-07-24 18:04:40.247675] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:33.351 [2024-07-24 18:04:40.305002] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:33.351 [2024-07-24 18:04:40.305087] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:33.351 [2024-07-24 18:04:40.305113] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:33.351 [2024-07-24 18:04:40.305133] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:33.351 [2024-07-24 18:04:40.305161] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:33.351 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.351 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:33.351 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:33.351 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:33.351 [2024-07-24 18:04:40.310202] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x908650 was disconnected and freed. delete nvme_qpair. 
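On the host side, the attach and read activity logged above comes from a short RPC sequence that can be read straight out of the trace; with the rpc_cmd wrapper expanded to the rpc.py script it shims over, it is roughly:

  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1              # options exactly as given in the trace
  $rpc -s /tmp/host.sock framework_start_init
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

The --wait-for-attach flag is why the log shows the discovery controller attaching, the discovery log page being fetched and nvme0 being created before the script moves on to wait for the nvme0n1 bdev.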
00:17:33.351 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:33.351 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.351 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:33.351 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.351 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:33.610 18:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:34.546 18:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:34.546 18:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:34.546 18:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.546 18:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:34.546 18:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:34.546 18:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:34.546 18:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:34.546 18:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.546 18:04:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:34.546 18:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:35.921 18:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:35.921 18:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.921 18:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:35.921 18:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:35.921 18:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.921 18:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:35.921 18:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:35.921 18:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.921 18:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:35.921 18:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:36.856 18:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:36.856 18:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:36.856 18:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.856 18:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:36.856 18:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:36.856 18:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:36.856 18:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:36.856 18:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.856 18:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:36.856 18:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:37.791 18:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:37.791 18:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:37.791 18:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:37.791 18:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.791 18:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:37.791 18:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:37.791 18:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:37.791 18:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.791 18:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:37.791 18:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:38.726 18:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:38.726 18:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:38.726 18:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:38.726 18:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:38.726 18:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:38.726 18:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.726 18:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:38.726 18:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.985 18:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:38.985 18:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:38.985 [2024-07-24 18:04:45.733402] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:38.985 [2024-07-24 18:04:45.733490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.985 [2024-07-24 18:04:45.733506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.985 [2024-07-24 18:04:45.733520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.985 [2024-07-24 18:04:45.733532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.985 [2024-07-24 18:04:45.733543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.985 [2024-07-24 18:04:45.733554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.985 [2024-07-24 18:04:45.733564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.985 [2024-07-24 18:04:45.733574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.985 [2024-07-24 18:04:45.733585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.985 [2024-07-24 18:04:45.733595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.985 [2024-07-24 18:04:45.733605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1900 
is same with the state(5) to be set 00:17:38.985 [2024-07-24 18:04:45.743394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d1900 (9): Bad file descriptor 00:17:38.985 [2024-07-24 18:04:45.753419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:39.919 18:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:39.919 18:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:39.919 18:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.919 18:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:39.919 18:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:39.919 18:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:39.919 18:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:39.919 [2024-07-24 18:04:46.781345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:39.919 [2024-07-24 18:04:46.781456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d1900 with addr=10.0.0.2, port=4420 00:17:39.919 [2024-07-24 18:04:46.781488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1900 is same with the state(5) to be set 00:17:39.919 [2024-07-24 18:04:46.781549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d1900 (9): Bad file descriptor 00:17:39.919 [2024-07-24 18:04:46.782275] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:39.919 [2024-07-24 18:04:46.782335] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:39.919 [2024-07-24 18:04:46.782355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:39.919 [2024-07-24 18:04:46.782376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:39.919 [2024-07-24 18:04:46.782411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:39.919 [2024-07-24 18:04:46.782431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:39.919 18:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.919 18:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:39.919 18:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:40.856 [2024-07-24 18:04:47.782497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:17:40.856 [2024-07-24 18:04:47.782568] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:40.856 [2024-07-24 18:04:47.782580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:40.856 [2024-07-24 18:04:47.782594] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:40.856 [2024-07-24 18:04:47.782616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:40.856 [2024-07-24 18:04:47.782644] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:40.856 [2024-07-24 18:04:47.782704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.856 [2024-07-24 18:04:47.782718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.856 [2024-07-24 18:04:47.782733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.856 [2024-07-24 18:04:47.782743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.856 [2024-07-24 18:04:47.782755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.856 [2024-07-24 18:04:47.782765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.856 [2024-07-24 18:04:47.782776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.856 [2024-07-24 18:04:47.782786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.856 [2024-07-24 18:04:47.782797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.857 [2024-07-24 18:04:47.782807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.857 [2024-07-24 18:04:47.782818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
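The reset and reconnect errors in this stretch are the behaviour under test rather than a failure of the run: the target interface was pulled out from under a connected controller. In iproute2 terms the trace injects the fault, and a few lines further down reverts it, with two pairs of commands inside the target namespace:

  # inject: drop the target address and take the link down (host/discovery_remove_ifc.sh@75-76)
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # revert: restore the address and bring the link back up (@82-83)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up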
00:17:40.857 [2024-07-24 18:04:47.783332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8743e0 (9): Bad file descriptor 00:17:40.857 [2024-07-24 18:04:47.784349] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:40.857 [2024-07-24 18:04:47.784377] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:40.857 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:40.857 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:40.857 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:40.857 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.857 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:40.857 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:40.857 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:41.116 18:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:42.060 18:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:42.060 18:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:42.060 18:04:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:42.060 18:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.060 18:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:42.060 18:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:42.060 18:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:42.060 18:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.060 18:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:42.060 18:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:42.996 [2024-07-24 18:04:49.793496] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:42.996 [2024-07-24 18:04:49.793533] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:42.996 [2024-07-24 18:04:49.793547] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:42.996 [2024-07-24 18:04:49.879647] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:42.996 [2024-07-24 18:04:49.935711] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:42.996 [2024-07-24 18:04:49.935767] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:42.996 [2024-07-24 18:04:49.935788] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:42.996 [2024-07-24 18:04:49.935805] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:42.996 [2024-07-24 18:04:49.935814] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:42.996 [2024-07-24 18:04:49.942089] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x8ed390 was disconnected and freed. delete nvme_qpair. 
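Once the address is back, discovery reattaches and a fresh nvme1 controller with an nvme1n1 bdev shows up, which is what the wait loop below is polling for. The repeated get_bdev_list calls reduce to a one-line RPC pipeline plus a one-second sleep; a condensed equivalent of the two helpers (names per the trace) is:

  get_bdev_list() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  # wait_for_bdev nvme1n1: poll until the bdev list matches the expected name
  while [[ "$(get_bdev_list)" != "nvme1n1" ]]; do
      sleep 1
  done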
00:17:43.254 18:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:43.254 18:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:43.254 18:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:43.254 18:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.254 18:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:43.254 18:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:43.254 18:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 89887 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 89887 ']' 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 89887 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89887 00:17:43.254 killing process with pid 89887 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89887' 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 89887 00:17:43.254 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 89887 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.513 rmmod nvme_tcp 00:17:43.513 rmmod nvme_fabrics 00:17:43.513 rmmod nvme_keyring 00:17:43.513 18:04:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 89837 ']' 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 89837 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 89837 ']' 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 89837 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89837 00:17:43.513 killing process with pid 89837 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89837' 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 89837 00:17:43.513 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 89837 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:43.772 00:17:43.772 real 0m14.147s 00:17:43.772 user 0m24.802s 00:17:43.772 sys 0m2.142s 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:43.772 ************************************ 00:17:43.772 END TEST nvmf_discovery_remove_ifc 00:17:43.772 ************************************ 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.772 ************************************ 00:17:43.772 START TEST nvmf_identify_kernel_target 00:17:43.772 ************************************ 00:17:43.772 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:43.772 * Looking for test storage... 00:17:44.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.036 
18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:44.036 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:44.037 Cannot find device "nvmf_tgt_br" 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.037 Cannot find device "nvmf_tgt_br2" 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:44.037 Cannot find device "nvmf_tgt_br" 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:44.037 Cannot find device "nvmf_tgt_br2" 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:44.037 18:04:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:44.037 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:44.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:17:44.296 00:17:44.296 --- 10.0.0.2 ping statistics --- 00:17:44.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.296 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:44.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:44.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:44.296 00:17:44.296 --- 10.0.0.3 ping statistics --- 00:17:44.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.296 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:44.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:44.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:17:44.296 00:17:44.296 --- 10.0.0.1 ping statistics --- 00:17:44.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.296 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:44.296 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:44.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:44.861 Waiting for block devices as requested 00:17:44.861 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:44.861 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:45.119 No valid GPT data, bailing 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:45.119 18:04:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:45.119 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:45.120 No valid GPT data, bailing 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:45.120 18:04:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:45.120 No valid GPT data, bailing 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:45.120 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:45.378 No valid GPT data, bailing 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -a 10.0.0.1 -t tcp -s 4420 00:17:45.378 00:17:45.378 Discovery Log Number of Records 2, Generation counter 2 00:17:45.378 =====Discovery Log Entry 0====== 00:17:45.378 trtype: tcp 00:17:45.378 adrfam: ipv4 00:17:45.378 subtype: current discovery subsystem 00:17:45.378 treq: not specified, sq flow control disable supported 00:17:45.378 portid: 1 00:17:45.378 trsvcid: 4420 00:17:45.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:45.378 traddr: 10.0.0.1 00:17:45.378 eflags: none 00:17:45.378 sectype: none 00:17:45.378 =====Discovery Log Entry 1====== 00:17:45.378 trtype: tcp 00:17:45.378 adrfam: ipv4 00:17:45.378 subtype: nvme subsystem 00:17:45.378 treq: not 
specified, sq flow control disable supported 00:17:45.378 portid: 1 00:17:45.378 trsvcid: 4420 00:17:45.378 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:45.378 traddr: 10.0.0.1 00:17:45.378 eflags: none 00:17:45.378 sectype: none 00:17:45.378 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:45.378 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:45.661 ===================================================== 00:17:45.661 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:45.661 ===================================================== 00:17:45.661 Controller Capabilities/Features 00:17:45.661 ================================ 00:17:45.661 Vendor ID: 0000 00:17:45.661 Subsystem Vendor ID: 0000 00:17:45.661 Serial Number: ee25db3de7a90724af42 00:17:45.661 Model Number: Linux 00:17:45.661 Firmware Version: 6.7.0-68 00:17:45.661 Recommended Arb Burst: 0 00:17:45.661 IEEE OUI Identifier: 00 00 00 00:17:45.661 Multi-path I/O 00:17:45.661 May have multiple subsystem ports: No 00:17:45.661 May have multiple controllers: No 00:17:45.661 Associated with SR-IOV VF: No 00:17:45.661 Max Data Transfer Size: Unlimited 00:17:45.661 Max Number of Namespaces: 0 00:17:45.661 Max Number of I/O Queues: 1024 00:17:45.661 NVMe Specification Version (VS): 1.3 00:17:45.661 NVMe Specification Version (Identify): 1.3 00:17:45.661 Maximum Queue Entries: 1024 00:17:45.661 Contiguous Queues Required: No 00:17:45.661 Arbitration Mechanisms Supported 00:17:45.661 Weighted Round Robin: Not Supported 00:17:45.661 Vendor Specific: Not Supported 00:17:45.661 Reset Timeout: 7500 ms 00:17:45.661 Doorbell Stride: 4 bytes 00:17:45.661 NVM Subsystem Reset: Not Supported 00:17:45.661 Command Sets Supported 00:17:45.661 NVM Command Set: Supported 00:17:45.661 Boot Partition: Not Supported 00:17:45.661 Memory Page Size Minimum: 4096 bytes 00:17:45.661 Memory Page Size Maximum: 4096 bytes 00:17:45.661 Persistent Memory Region: Not Supported 00:17:45.661 Optional Asynchronous Events Supported 00:17:45.661 Namespace Attribute Notices: Not Supported 00:17:45.661 Firmware Activation Notices: Not Supported 00:17:45.661 ANA Change Notices: Not Supported 00:17:45.661 PLE Aggregate Log Change Notices: Not Supported 00:17:45.661 LBA Status Info Alert Notices: Not Supported 00:17:45.661 EGE Aggregate Log Change Notices: Not Supported 00:17:45.661 Normal NVM Subsystem Shutdown event: Not Supported 00:17:45.661 Zone Descriptor Change Notices: Not Supported 00:17:45.661 Discovery Log Change Notices: Supported 00:17:45.661 Controller Attributes 00:17:45.661 128-bit Host Identifier: Not Supported 00:17:45.661 Non-Operational Permissive Mode: Not Supported 00:17:45.661 NVM Sets: Not Supported 00:17:45.661 Read Recovery Levels: Not Supported 00:17:45.661 Endurance Groups: Not Supported 00:17:45.661 Predictable Latency Mode: Not Supported 00:17:45.661 Traffic Based Keep ALive: Not Supported 00:17:45.661 Namespace Granularity: Not Supported 00:17:45.661 SQ Associations: Not Supported 00:17:45.661 UUID List: Not Supported 00:17:45.661 Multi-Domain Subsystem: Not Supported 00:17:45.661 Fixed Capacity Management: Not Supported 00:17:45.661 Variable Capacity Management: Not Supported 00:17:45.661 Delete Endurance Group: Not Supported 00:17:45.662 Delete NVM Set: Not Supported 00:17:45.662 Extended LBA Formats Supported: Not Supported 00:17:45.662 Flexible Data 
Placement Supported: Not Supported 00:17:45.662 00:17:45.662 Controller Memory Buffer Support 00:17:45.662 ================================ 00:17:45.662 Supported: No 00:17:45.662 00:17:45.662 Persistent Memory Region Support 00:17:45.662 ================================ 00:17:45.662 Supported: No 00:17:45.662 00:17:45.662 Admin Command Set Attributes 00:17:45.662 ============================ 00:17:45.662 Security Send/Receive: Not Supported 00:17:45.662 Format NVM: Not Supported 00:17:45.662 Firmware Activate/Download: Not Supported 00:17:45.662 Namespace Management: Not Supported 00:17:45.662 Device Self-Test: Not Supported 00:17:45.662 Directives: Not Supported 00:17:45.662 NVMe-MI: Not Supported 00:17:45.662 Virtualization Management: Not Supported 00:17:45.662 Doorbell Buffer Config: Not Supported 00:17:45.662 Get LBA Status Capability: Not Supported 00:17:45.662 Command & Feature Lockdown Capability: Not Supported 00:17:45.662 Abort Command Limit: 1 00:17:45.662 Async Event Request Limit: 1 00:17:45.662 Number of Firmware Slots: N/A 00:17:45.662 Firmware Slot 1 Read-Only: N/A 00:17:45.662 Firmware Activation Without Reset: N/A 00:17:45.662 Multiple Update Detection Support: N/A 00:17:45.662 Firmware Update Granularity: No Information Provided 00:17:45.662 Per-Namespace SMART Log: No 00:17:45.662 Asymmetric Namespace Access Log Page: Not Supported 00:17:45.662 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:45.662 Command Effects Log Page: Not Supported 00:17:45.662 Get Log Page Extended Data: Supported 00:17:45.662 Telemetry Log Pages: Not Supported 00:17:45.662 Persistent Event Log Pages: Not Supported 00:17:45.662 Supported Log Pages Log Page: May Support 00:17:45.662 Commands Supported & Effects Log Page: Not Supported 00:17:45.662 Feature Identifiers & Effects Log Page:May Support 00:17:45.662 NVMe-MI Commands & Effects Log Page: May Support 00:17:45.662 Data Area 4 for Telemetry Log: Not Supported 00:17:45.662 Error Log Page Entries Supported: 1 00:17:45.662 Keep Alive: Not Supported 00:17:45.662 00:17:45.662 NVM Command Set Attributes 00:17:45.662 ========================== 00:17:45.662 Submission Queue Entry Size 00:17:45.662 Max: 1 00:17:45.662 Min: 1 00:17:45.662 Completion Queue Entry Size 00:17:45.662 Max: 1 00:17:45.662 Min: 1 00:17:45.662 Number of Namespaces: 0 00:17:45.662 Compare Command: Not Supported 00:17:45.662 Write Uncorrectable Command: Not Supported 00:17:45.662 Dataset Management Command: Not Supported 00:17:45.662 Write Zeroes Command: Not Supported 00:17:45.662 Set Features Save Field: Not Supported 00:17:45.662 Reservations: Not Supported 00:17:45.662 Timestamp: Not Supported 00:17:45.662 Copy: Not Supported 00:17:45.662 Volatile Write Cache: Not Present 00:17:45.662 Atomic Write Unit (Normal): 1 00:17:45.662 Atomic Write Unit (PFail): 1 00:17:45.662 Atomic Compare & Write Unit: 1 00:17:45.662 Fused Compare & Write: Not Supported 00:17:45.662 Scatter-Gather List 00:17:45.662 SGL Command Set: Supported 00:17:45.662 SGL Keyed: Not Supported 00:17:45.662 SGL Bit Bucket Descriptor: Not Supported 00:17:45.662 SGL Metadata Pointer: Not Supported 00:17:45.662 Oversized SGL: Not Supported 00:17:45.662 SGL Metadata Address: Not Supported 00:17:45.662 SGL Offset: Supported 00:17:45.662 Transport SGL Data Block: Not Supported 00:17:45.662 Replay Protected Memory Block: Not Supported 00:17:45.662 00:17:45.662 Firmware Slot Information 00:17:45.662 ========================= 00:17:45.662 Active slot: 0 00:17:45.662 00:17:45.662 00:17:45.662 Error Log 
00:17:45.662 ========= 00:17:45.662 00:17:45.662 Active Namespaces 00:17:45.662 ================= 00:17:45.662 Discovery Log Page 00:17:45.662 ================== 00:17:45.662 Generation Counter: 2 00:17:45.662 Number of Records: 2 00:17:45.662 Record Format: 0 00:17:45.662 00:17:45.662 Discovery Log Entry 0 00:17:45.662 ---------------------- 00:17:45.662 Transport Type: 3 (TCP) 00:17:45.662 Address Family: 1 (IPv4) 00:17:45.662 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:45.662 Entry Flags: 00:17:45.662 Duplicate Returned Information: 0 00:17:45.662 Explicit Persistent Connection Support for Discovery: 0 00:17:45.662 Transport Requirements: 00:17:45.662 Secure Channel: Not Specified 00:17:45.662 Port ID: 1 (0x0001) 00:17:45.662 Controller ID: 65535 (0xffff) 00:17:45.662 Admin Max SQ Size: 32 00:17:45.662 Transport Service Identifier: 4420 00:17:45.662 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:45.662 Transport Address: 10.0.0.1 00:17:45.662 Discovery Log Entry 1 00:17:45.662 ---------------------- 00:17:45.662 Transport Type: 3 (TCP) 00:17:45.662 Address Family: 1 (IPv4) 00:17:45.662 Subsystem Type: 2 (NVM Subsystem) 00:17:45.662 Entry Flags: 00:17:45.662 Duplicate Returned Information: 0 00:17:45.662 Explicit Persistent Connection Support for Discovery: 0 00:17:45.662 Transport Requirements: 00:17:45.662 Secure Channel: Not Specified 00:17:45.662 Port ID: 1 (0x0001) 00:17:45.662 Controller ID: 65535 (0xffff) 00:17:45.662 Admin Max SQ Size: 32 00:17:45.662 Transport Service Identifier: 4420 00:17:45.662 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:45.662 Transport Address: 10.0.0.1 00:17:45.662 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:45.662 get_feature(0x01) failed 00:17:45.662 get_feature(0x02) failed 00:17:45.662 get_feature(0x04) failed 00:17:45.662 ===================================================== 00:17:45.662 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:45.662 ===================================================== 00:17:45.662 Controller Capabilities/Features 00:17:45.662 ================================ 00:17:45.662 Vendor ID: 0000 00:17:45.662 Subsystem Vendor ID: 0000 00:17:45.662 Serial Number: ce822c02d4f92ff18844 00:17:45.662 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:45.662 Firmware Version: 6.7.0-68 00:17:45.662 Recommended Arb Burst: 6 00:17:45.662 IEEE OUI Identifier: 00 00 00 00:17:45.662 Multi-path I/O 00:17:45.662 May have multiple subsystem ports: Yes 00:17:45.662 May have multiple controllers: Yes 00:17:45.662 Associated with SR-IOV VF: No 00:17:45.662 Max Data Transfer Size: Unlimited 00:17:45.662 Max Number of Namespaces: 1024 00:17:45.662 Max Number of I/O Queues: 128 00:17:45.662 NVMe Specification Version (VS): 1.3 00:17:45.662 NVMe Specification Version (Identify): 1.3 00:17:45.662 Maximum Queue Entries: 1024 00:17:45.662 Contiguous Queues Required: No 00:17:45.662 Arbitration Mechanisms Supported 00:17:45.662 Weighted Round Robin: Not Supported 00:17:45.662 Vendor Specific: Not Supported 00:17:45.662 Reset Timeout: 7500 ms 00:17:45.662 Doorbell Stride: 4 bytes 00:17:45.662 NVM Subsystem Reset: Not Supported 00:17:45.662 Command Sets Supported 00:17:45.662 NVM Command Set: Supported 00:17:45.662 Boot Partition: Not Supported 00:17:45.662 Memory 
Page Size Minimum: 4096 bytes 00:17:45.662 Memory Page Size Maximum: 4096 bytes 00:17:45.662 Persistent Memory Region: Not Supported 00:17:45.662 Optional Asynchronous Events Supported 00:17:45.662 Namespace Attribute Notices: Supported 00:17:45.662 Firmware Activation Notices: Not Supported 00:17:45.662 ANA Change Notices: Supported 00:17:45.662 PLE Aggregate Log Change Notices: Not Supported 00:17:45.662 LBA Status Info Alert Notices: Not Supported 00:17:45.662 EGE Aggregate Log Change Notices: Not Supported 00:17:45.662 Normal NVM Subsystem Shutdown event: Not Supported 00:17:45.662 Zone Descriptor Change Notices: Not Supported 00:17:45.662 Discovery Log Change Notices: Not Supported 00:17:45.662 Controller Attributes 00:17:45.662 128-bit Host Identifier: Supported 00:17:45.662 Non-Operational Permissive Mode: Not Supported 00:17:45.662 NVM Sets: Not Supported 00:17:45.662 Read Recovery Levels: Not Supported 00:17:45.662 Endurance Groups: Not Supported 00:17:45.662 Predictable Latency Mode: Not Supported 00:17:45.662 Traffic Based Keep ALive: Supported 00:17:45.662 Namespace Granularity: Not Supported 00:17:45.662 SQ Associations: Not Supported 00:17:45.662 UUID List: Not Supported 00:17:45.662 Multi-Domain Subsystem: Not Supported 00:17:45.662 Fixed Capacity Management: Not Supported 00:17:45.662 Variable Capacity Management: Not Supported 00:17:45.662 Delete Endurance Group: Not Supported 00:17:45.662 Delete NVM Set: Not Supported 00:17:45.662 Extended LBA Formats Supported: Not Supported 00:17:45.662 Flexible Data Placement Supported: Not Supported 00:17:45.662 00:17:45.662 Controller Memory Buffer Support 00:17:45.663 ================================ 00:17:45.663 Supported: No 00:17:45.663 00:17:45.663 Persistent Memory Region Support 00:17:45.663 ================================ 00:17:45.663 Supported: No 00:17:45.663 00:17:45.663 Admin Command Set Attributes 00:17:45.663 ============================ 00:17:45.663 Security Send/Receive: Not Supported 00:17:45.663 Format NVM: Not Supported 00:17:45.663 Firmware Activate/Download: Not Supported 00:17:45.663 Namespace Management: Not Supported 00:17:45.663 Device Self-Test: Not Supported 00:17:45.663 Directives: Not Supported 00:17:45.663 NVMe-MI: Not Supported 00:17:45.663 Virtualization Management: Not Supported 00:17:45.663 Doorbell Buffer Config: Not Supported 00:17:45.663 Get LBA Status Capability: Not Supported 00:17:45.663 Command & Feature Lockdown Capability: Not Supported 00:17:45.663 Abort Command Limit: 4 00:17:45.663 Async Event Request Limit: 4 00:17:45.663 Number of Firmware Slots: N/A 00:17:45.663 Firmware Slot 1 Read-Only: N/A 00:17:45.663 Firmware Activation Without Reset: N/A 00:17:45.663 Multiple Update Detection Support: N/A 00:17:45.663 Firmware Update Granularity: No Information Provided 00:17:45.663 Per-Namespace SMART Log: Yes 00:17:45.663 Asymmetric Namespace Access Log Page: Supported 00:17:45.663 ANA Transition Time : 10 sec 00:17:45.663 00:17:45.663 Asymmetric Namespace Access Capabilities 00:17:45.663 ANA Optimized State : Supported 00:17:45.663 ANA Non-Optimized State : Supported 00:17:45.663 ANA Inaccessible State : Supported 00:17:45.663 ANA Persistent Loss State : Supported 00:17:45.663 ANA Change State : Supported 00:17:45.663 ANAGRPID is not changed : No 00:17:45.663 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:45.663 00:17:45.663 ANA Group Identifier Maximum : 128 00:17:45.663 Number of ANA Group Identifiers : 128 00:17:45.663 Max Number of Allowed Namespaces : 1024 00:17:45.663 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:45.663 Command Effects Log Page: Supported 00:17:45.663 Get Log Page Extended Data: Supported 00:17:45.663 Telemetry Log Pages: Not Supported 00:17:45.663 Persistent Event Log Pages: Not Supported 00:17:45.663 Supported Log Pages Log Page: May Support 00:17:45.663 Commands Supported & Effects Log Page: Not Supported 00:17:45.663 Feature Identifiers & Effects Log Page:May Support 00:17:45.663 NVMe-MI Commands & Effects Log Page: May Support 00:17:45.663 Data Area 4 for Telemetry Log: Not Supported 00:17:45.663 Error Log Page Entries Supported: 128 00:17:45.663 Keep Alive: Supported 00:17:45.663 Keep Alive Granularity: 1000 ms 00:17:45.663 00:17:45.663 NVM Command Set Attributes 00:17:45.663 ========================== 00:17:45.663 Submission Queue Entry Size 00:17:45.663 Max: 64 00:17:45.663 Min: 64 00:17:45.663 Completion Queue Entry Size 00:17:45.663 Max: 16 00:17:45.663 Min: 16 00:17:45.663 Number of Namespaces: 1024 00:17:45.663 Compare Command: Not Supported 00:17:45.663 Write Uncorrectable Command: Not Supported 00:17:45.663 Dataset Management Command: Supported 00:17:45.663 Write Zeroes Command: Supported 00:17:45.663 Set Features Save Field: Not Supported 00:17:45.663 Reservations: Not Supported 00:17:45.663 Timestamp: Not Supported 00:17:45.663 Copy: Not Supported 00:17:45.663 Volatile Write Cache: Present 00:17:45.663 Atomic Write Unit (Normal): 1 00:17:45.663 Atomic Write Unit (PFail): 1 00:17:45.663 Atomic Compare & Write Unit: 1 00:17:45.663 Fused Compare & Write: Not Supported 00:17:45.663 Scatter-Gather List 00:17:45.663 SGL Command Set: Supported 00:17:45.663 SGL Keyed: Not Supported 00:17:45.663 SGL Bit Bucket Descriptor: Not Supported 00:17:45.663 SGL Metadata Pointer: Not Supported 00:17:45.663 Oversized SGL: Not Supported 00:17:45.663 SGL Metadata Address: Not Supported 00:17:45.663 SGL Offset: Supported 00:17:45.663 Transport SGL Data Block: Not Supported 00:17:45.663 Replay Protected Memory Block: Not Supported 00:17:45.663 00:17:45.663 Firmware Slot Information 00:17:45.663 ========================= 00:17:45.663 Active slot: 0 00:17:45.663 00:17:45.663 Asymmetric Namespace Access 00:17:45.663 =========================== 00:17:45.663 Change Count : 0 00:17:45.663 Number of ANA Group Descriptors : 1 00:17:45.663 ANA Group Descriptor : 0 00:17:45.663 ANA Group ID : 1 00:17:45.663 Number of NSID Values : 1 00:17:45.663 Change Count : 0 00:17:45.663 ANA State : 1 00:17:45.663 Namespace Identifier : 1 00:17:45.663 00:17:45.663 Commands Supported and Effects 00:17:45.663 ============================== 00:17:45.663 Admin Commands 00:17:45.663 -------------- 00:17:45.663 Get Log Page (02h): Supported 00:17:45.663 Identify (06h): Supported 00:17:45.663 Abort (08h): Supported 00:17:45.663 Set Features (09h): Supported 00:17:45.663 Get Features (0Ah): Supported 00:17:45.663 Asynchronous Event Request (0Ch): Supported 00:17:45.663 Keep Alive (18h): Supported 00:17:45.663 I/O Commands 00:17:45.663 ------------ 00:17:45.663 Flush (00h): Supported 00:17:45.663 Write (01h): Supported LBA-Change 00:17:45.663 Read (02h): Supported 00:17:45.663 Write Zeroes (08h): Supported LBA-Change 00:17:45.663 Dataset Management (09h): Supported 00:17:45.663 00:17:45.663 Error Log 00:17:45.663 ========= 00:17:45.663 Entry: 0 00:17:45.663 Error Count: 0x3 00:17:45.663 Submission Queue Id: 0x0 00:17:45.663 Command Id: 0x5 00:17:45.663 Phase Bit: 0 00:17:45.663 Status Code: 0x2 00:17:45.663 Status Code Type: 0x0 00:17:45.663 Do Not Retry: 1 00:17:45.663 Error 
Location: 0x28 00:17:45.663 LBA: 0x0 00:17:45.663 Namespace: 0x0 00:17:45.663 Vendor Log Page: 0x0 00:17:45.663 ----------- 00:17:45.663 Entry: 1 00:17:45.663 Error Count: 0x2 00:17:45.663 Submission Queue Id: 0x0 00:17:45.663 Command Id: 0x5 00:17:45.663 Phase Bit: 0 00:17:45.663 Status Code: 0x2 00:17:45.663 Status Code Type: 0x0 00:17:45.663 Do Not Retry: 1 00:17:45.663 Error Location: 0x28 00:17:45.663 LBA: 0x0 00:17:45.663 Namespace: 0x0 00:17:45.663 Vendor Log Page: 0x0 00:17:45.663 ----------- 00:17:45.663 Entry: 2 00:17:45.663 Error Count: 0x1 00:17:45.663 Submission Queue Id: 0x0 00:17:45.663 Command Id: 0x4 00:17:45.663 Phase Bit: 0 00:17:45.663 Status Code: 0x2 00:17:45.663 Status Code Type: 0x0 00:17:45.663 Do Not Retry: 1 00:17:45.663 Error Location: 0x28 00:17:45.663 LBA: 0x0 00:17:45.663 Namespace: 0x0 00:17:45.663 Vendor Log Page: 0x0 00:17:45.663 00:17:45.663 Number of Queues 00:17:45.663 ================ 00:17:45.663 Number of I/O Submission Queues: 128 00:17:45.663 Number of I/O Completion Queues: 128 00:17:45.663 00:17:45.663 ZNS Specific Controller Data 00:17:45.663 ============================ 00:17:45.663 Zone Append Size Limit: 0 00:17:45.663 00:17:45.663 00:17:45.663 Active Namespaces 00:17:45.663 ================= 00:17:45.663 get_feature(0x05) failed 00:17:45.663 Namespace ID:1 00:17:45.663 Command Set Identifier: NVM (00h) 00:17:45.663 Deallocate: Supported 00:17:45.663 Deallocated/Unwritten Error: Not Supported 00:17:45.663 Deallocated Read Value: Unknown 00:17:45.663 Deallocate in Write Zeroes: Not Supported 00:17:45.663 Deallocated Guard Field: 0xFFFF 00:17:45.663 Flush: Supported 00:17:45.663 Reservation: Not Supported 00:17:45.663 Namespace Sharing Capabilities: Multiple Controllers 00:17:45.663 Size (in LBAs): 1310720 (5GiB) 00:17:45.663 Capacity (in LBAs): 1310720 (5GiB) 00:17:45.663 Utilization (in LBAs): 1310720 (5GiB) 00:17:45.663 UUID: 16d2691d-6f1c-4ca7-8788-8c962660a03e 00:17:45.663 Thin Provisioning: Not Supported 00:17:45.663 Per-NS Atomic Units: Yes 00:17:45.663 Atomic Boundary Size (Normal): 0 00:17:45.663 Atomic Boundary Size (PFail): 0 00:17:45.663 Atomic Boundary Offset: 0 00:17:45.663 NGUID/EUI64 Never Reused: No 00:17:45.663 ANA group ID: 1 00:17:45.663 Namespace Write Protected: No 00:17:45.663 Number of LBA Formats: 1 00:17:45.663 Current LBA Format: LBA Format #00 00:17:45.664 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:45.664 00:17:45.664 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:45.664 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:45.664 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:45.664 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:45.664 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:45.664 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:45.664 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:45.924 rmmod nvme_tcp 00:17:45.924 rmmod nvme_fabrics 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:45.924 18:04:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:45.924 18:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:46.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:46.748 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:46.748 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:46.748 00:17:46.748 real 0m3.031s 00:17:46.748 user 0m1.036s 00:17:46.748 sys 0m1.554s 00:17:46.748 18:04:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.748 ************************************ 00:17:46.748 END TEST nvmf_identify_kernel_target 00:17:46.748 ************************************ 00:17:46.748 18:04:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.006 ************************************ 00:17:47.006 START TEST nvmf_auth_host 00:17:47.006 ************************************ 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:47.006 * Looking for test storage... 00:17:47.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.006 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:47.007 18:04:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:47.007 Cannot find device "nvmf_tgt_br" 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:47.007 Cannot find device "nvmf_tgt_br2" 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:47.007 Cannot find device "nvmf_tgt_br" 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:47.007 Cannot find device "nvmf_tgt_br2" 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:47.007 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:47.266 18:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:47.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:47.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:47.266 18:04:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:47.266 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:47.267 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:47.267 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:47.267 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:47.267 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:47.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:17:47.267 00:17:47.267 --- 10.0.0.2 ping statistics --- 00:17:47.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.267 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:47.267 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:47.267 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:47.267 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:47.267 00:17:47.267 --- 10.0.0.3 ping statistics --- 00:17:47.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.267 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:47.267 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:47.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:47.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:17:47.267 00:17:47.267 --- 10.0.0.1 ping statistics --- 00:17:47.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.267 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:47.267 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=90789 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 90789 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 90789 ']' 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
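For reference, the veth/bridge topology that nvmf_veth_init builds here, and that the pings above verify, can be reproduced stand-alone. The sketch below is a minimal reduction to a single target interface, with the interface names and addresses taken from this log; it is not the nvmf/common.sh helper itself and omits the second target interface, cleanup and error handling.

# Minimal sketch (run as root); names and IPs as used in this test.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge the two host-side veth ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                # connectivity check, as in the log

With the namespace in place, nvmfappstart prefixes NVMF_APP with "ip netns exec nvmf_tgt_ns_spdk" and launches nvmf_tgt (-i 0 -e 0xFFFF -L nvme_auth) inside it, while waitforlisten polls /var/tmp/spdk.sock until the new process (PID 90789 here) is listening.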
00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.526 18:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=351a757bd1d9029f150eab93e295440f 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NXP 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 351a757bd1d9029f150eab93e295440f 0 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 351a757bd1d9029f150eab93e295440f 0 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=351a757bd1d9029f150eab93e295440f 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.NXP 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.NXP 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.NXP 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.460 18:04:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4c720db31e37d6e0f6e8e07f4ea5edd34b3f1c29ac9aa5b340b816491eea335a 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GZJ 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4c720db31e37d6e0f6e8e07f4ea5edd34b3f1c29ac9aa5b340b816491eea335a 3 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4c720db31e37d6e0f6e8e07f4ea5edd34b3f1c29ac9aa5b340b816491eea335a 3 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4c720db31e37d6e0f6e8e07f4ea5edd34b3f1c29ac9aa5b340b816491eea335a 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GZJ 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GZJ 00:17:48.460 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.GZJ 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=72ca1b0cf822c1f0f0751c612232627e45570d236bfa9416 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2GI 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 72ca1b0cf822c1f0f0751c612232627e45570d236bfa9416 0 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 72ca1b0cf822c1f0f0751c612232627e45570d236bfa9416 0 
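The gen_dhchap_key calls here draw the secret from /dev/urandom with xxd (for example 24 random bytes for a 48-character hex key) and hand the hex string to format_dhchap_key, which wraps it into a DH-HMAC-CHAP secret of the form "DHHC-1:<digest id>:<base64>:". The formatted keys that surface later in this log (DHHC-1:00:... for the plain keys, DHHC-1:02:... for the sha384 ckey) are simply the base64 of the ASCII hex string plus a short trailer. The snippet below is a hedged reconstruction of that formatting step, not the literal nvmf/common.sh code; the trailer is assumed to be a little-endian CRC-32 of the secret, following nvme-cli's gen-dhchap-key convention.

# Hedged reconstruction of format_dhchap_key (assumptions noted inline).
format_dhchap_key() {   # usage: format_dhchap_key <hex-string-secret> <digest id: 0=none, 1..3=sha256/384/512>
    python3 -c '
import base64, sys, zlib
secret = sys.argv[1].encode()                   # the hex string itself is the secret material
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumption: 4-byte little-endian CRC-32, as in nvme-cli
print(f"DHHC-1:{int(sys.argv[2]):02}:" + base64.b64encode(secret + crc).decode() + ":")
' "$1" "$2"
}

key=$(xxd -p -c0 -l 24 /dev/urandom)            # 48 hex characters, matching gen_dhchap_key null 48
keyfile=$(mktemp -t spdk.key-null.XXX)
format_dhchap_key "$key" 0 > "$keyfile" && chmod 0600 "$keyfile"

The resulting file paths are what keys[i] and ckeys[i] record, and they are handed to the target further down via the rpc_cmd keyring_file_add_key calls.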
00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=72ca1b0cf822c1f0f0751c612232627e45570d236bfa9416 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2GI 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2GI 00:17:48.461 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.2GI 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2448791aa28e74705a4f50a67a5952ded99132ea0e7f5402 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7Yz 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2448791aa28e74705a4f50a67a5952ded99132ea0e7f5402 2 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2448791aa28e74705a4f50a67a5952ded99132ea0e7f5402 2 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2448791aa28e74705a4f50a67a5952ded99132ea0e7f5402 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7Yz 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7Yz 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.7Yz 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.719 18:04:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2a69a91405670161a8ca62173480fcec 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.oRM 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2a69a91405670161a8ca62173480fcec 1 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2a69a91405670161a8ca62173480fcec 1 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.719 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2a69a91405670161a8ca62173480fcec 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.oRM 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.oRM 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.oRM 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c680add23c40538a87fa7490dd4be5dd 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FEx 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c680add23c40538a87fa7490dd4be5dd 1 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c680add23c40538a87fa7490dd4be5dd 1 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=c680add23c40538a87fa7490dd4be5dd 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FEx 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FEx 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.FEx 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=57eb24487fe6eb43612832ef000ae6580c40c753d37121b0 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Dck 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 57eb24487fe6eb43612832ef000ae6580c40c753d37121b0 2 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 57eb24487fe6eb43612832ef000ae6580c40c753d37121b0 2 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=57eb24487fe6eb43612832ef000ae6580c40c753d37121b0 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:48.720 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Dck 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Dck 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Dck 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:48.979 18:04:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1a7e55e636def15d7bbeda18de83b58f 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kUs 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1a7e55e636def15d7bbeda18de83b58f 0 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1a7e55e636def15d7bbeda18de83b58f 0 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1a7e55e636def15d7bbeda18de83b58f 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kUs 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kUs 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.kUs 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b8ef1df6cdfa9559f1ab49e33a6d1ab31aa5c2a8498acfd46431499373e97885 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.THb 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b8ef1df6cdfa9559f1ab49e33a6d1ab31aa5c2a8498acfd46431499373e97885 3 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b8ef1df6cdfa9559f1ab49e33a6d1ab31aa5c2a8498acfd46431499373e97885 3 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b8ef1df6cdfa9559f1ab49e33a6d1ab31aa5c2a8498acfd46431499373e97885 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.THb 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.THb 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.THb 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 90789 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 90789 ']' 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:48.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:48.979 18:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NXP 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.GZJ ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GZJ 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.2GI 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.7Yz ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.7Yz 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.oRM 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.FEx ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FEx 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Dck 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.kUs ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.kUs 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.THb 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.272 18:04:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:49.272 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:49.532 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:49.532 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:49.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:49.792 Waiting for block devices as requested 00:17:49.792 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:50.052 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:50.619 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:50.619 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:50.619 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:50.619 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:50.619 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:50.619 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:50.619 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:50.619 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:50.619 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:50.877 No valid GPT data, bailing 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:50.877 No valid GPT data, bailing 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:50.877 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:50.878 No valid GPT data, bailing 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:50.878 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:51.167 No valid GPT data, bailing 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:51.167 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -a 10.0.0.1 -t tcp -s 4420 00:17:51.168 00:17:51.168 Discovery Log Number of Records 2, Generation counter 2 00:17:51.168 =====Discovery Log Entry 0====== 00:17:51.168 trtype: tcp 00:17:51.168 adrfam: ipv4 00:17:51.168 subtype: current discovery subsystem 00:17:51.168 treq: not specified, sq flow control disable supported 00:17:51.168 portid: 1 00:17:51.168 trsvcid: 4420 00:17:51.168 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:51.168 traddr: 10.0.0.1 00:17:51.168 eflags: none 00:17:51.168 sectype: none 00:17:51.168 =====Discovery Log Entry 1====== 00:17:51.168 trtype: tcp 00:17:51.168 adrfam: ipv4 00:17:51.168 subtype: nvme subsystem 00:17:51.168 treq: not specified, sq flow control disable supported 00:17:51.168 portid: 1 00:17:51.168 trsvcid: 4420 00:17:51.168 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:51.168 traddr: 10.0.0.1 00:17:51.168 eflags: none 00:17:51.168 sectype: none 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.168 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.168 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.426 nvme0n1 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.426 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.427 nvme0n1 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.427 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.687 
18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.687 18:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.687 nvme0n1 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:17:51.687 18:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.687 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.688 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.947 nvme0n1 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.947 18:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.947 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.206 nvme0n1 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:52.206 
18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.206 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.206 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.206 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.206 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.206 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.206 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.206 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.206 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
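The trace above is the harness bringing up a Linux kernel nvmet soft target over configfs, handing it DH-HMAC-CHAP secrets, and then attaching from the SPDK host. Condensed into a standalone sketch for readability (illustrative only: the NQNs, the /dev/nvme1n1 backing device, the 10.0.0.1:4420 TCP listener and the DHHC-1 secrets are copied from this run, while the configfs attribute names are assumed to be the stock kernel nvmet ones rather than anything these scripts define), the sequence is roughly:

  # target side: kernel nvmet soft target driven through configfs
  SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  PORT=/sys/kernel/config/nvmet/ports/1
  HOSTDIR=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  modprobe nvmet nvmet-tcp
  mkdir -p "$SUBSYS/namespaces/1" "$PORT" "$HOSTDIR"
  echo /dev/nvme1n1 > "$SUBSYS/namespaces/1/device_path"   # back the namespace with the unused local NVMe disk
  echo 1            > "$SUBSYS/namespaces/1/enable"
  echo tcp          > "$PORT/addr_trtype"
  echo ipv4         > "$PORT/addr_adrfam"
  echo 10.0.0.1     > "$PORT/addr_traddr"
  echo 4420         > "$PORT/addr_trsvcid"
  ln -s "$SUBSYS" "$PORT/subsystems/"          # expose the subsystem on the listener
  echo 0 > "$SUBSYS/attr_allow_any_host"       # restrict access to explicitly allowed hosts
  ln -s "$HOSTDIR" "$SUBSYS/allowed_hosts/"
  # per-host DH-HMAC-CHAP material, the same DHHC-1:<id>:<base64>: strings the trace echoes
  echo 'hmac(sha256)' > "$HOSTDIR/dhchap_hash"
  echo ffdhe2048      > "$HOSTDIR/dhchap_dhgroup"
  echo 'DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==:' > "$HOSTDIR/dhchap_key"
  echo 'DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==:' > "$HOSTDIR/dhchap_ctrl_key"

  # host side: SPDK bdev_nvme, mirroring the rpc_cmd calls in the trace
  # (rpc_cmd wraps scripts/rpc.py; key1/ckey1 are key names the harness registered earlier, outside this excerpt)
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

The test then repeats the attach/detach cycle below for every digest, DH group and key index combination, detaching nvme0 after each successful authentication.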
00:17:52.207 nvme0n1 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:52.207 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:52.774 18:04:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.774 nvme0n1 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.774 18:04:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.774 18:04:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.774 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.042 nvme0n1 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.042 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.043 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.043 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.043 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.043 18:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.301 nvme0n1 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.301 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.560 nvme0n1 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.560 nvme0n1 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.560 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:53.819 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:17:53.820 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:17:53.820 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.820 18:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.385 18:05:01 
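
The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) push each digest/DH-group/key combination to the kernel nvmet target before the host attempts to connect with the same material. The xtrace shows only the bare echo commands, not their redirect targets, so the following is a hedged reconstruction of what such a helper plausibly does, assuming the standard nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the script's real implementation may differ.

  # Plausible shape of nvmet_auth_set_key; the configfs path and attribute names are assumptions.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}             # arrays populated earlier in the test
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
      echo "hmac($digest)" > "$host/dhchap_hash"                # 'hmac(sha256)' in this trace
      echo "$dhgroup"      > "$host/dhchap_dhgroup"             # ffdhe2048..ffdhe8192
      echo "$key"          > "$host/dhchap_key"                 # DHHC-1 secret for this keyid
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # only when a controller key exists
  }
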
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.385 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.643 nvme0n1 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.643 18:05:01 
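
Stripped of the xtrace noise, each connect_authenticate iteration above is three SPDK JSON-RPC calls that appear verbatim in the trace: bdev_nvme_set_options narrowed to a single digest and DH group, bdev_nvme_attach_controller carrying --dhchap-key/--dhchap-ctrlr-key, and a bdev_nvme_get_controllers / bdev_nvme_detach_controller pair to verify and tear the session down. A minimal standalone sketch of the same sequence, assuming the stock scripts/rpc.py client in place of the test's rpc_cmd wrapper:

  # Same RPCs as the trace; scripts/rpc.py is an assumption (the test goes through rpc_cmd).
  RPC=./scripts/rpc.py
  $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1    # key names registered with the keyring earlier in the run
  $RPC bdev_nvme_get_controllers | jq -r '.[].name' # expect: nvme0
  $RPC bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines that follow each successful attach in the log are that RPC's output: the name of the NVMe bdev created on top of the freshly authenticated controller.
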
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.643 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.901 nvme0n1 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.901 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.902 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.902 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.902 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.902 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.160 nvme0n1 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.160 18:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.160 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.419 nvme0n1 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:55.419 18:05:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:55.419 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.420 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.678 nvme0n1 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:55.678 18:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.598 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.857 nvme0n1 00:17:57.857 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.857 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.857 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.857 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.857 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.857 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.858 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.116 nvme0n1 00:17:58.116 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.116 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.116 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.116 18:05:05 
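
The comparison that follows, [[ nvme0 == \n\v\m\e\0 ]], is not corruption: when bash xtrace prints a [[ ... == pattern ]] test it backslash-escapes every character of a quoted right-hand side to show that it matches literally. The check simply asserts that bdev_nvme_get_controllers reported a controller named nvme0. A two-line reproduction:

  set -x
  name=nvme0                 # stands in for: rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
  [[ $name == "nvme0" ]]     # xtrace renders this as: [[ nvme0 == \n\v\m\e\0 ]]
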
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.116 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.116 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.375 18:05:05 
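
get_main_ns_ip, invoked next, is the helper behind the ip_candidates lines that repeat throughout this trace: it maps the transport to the name of an environment variable (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP) and echoes that variable's value, 10.0.0.1 on this virtual-network run. A condensed sketch of the selection logic as it reads from the xtrace (the real nvmf/common.sh may guard additional cases):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # holds a variable *name*, e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # indirect expansion; the trace shows [[ -z 10.0.0.1 ]]
      echo "${!ip}"                          # 10.0.0.1 here
  }
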
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:58.375 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:58.376 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.376 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.376 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.633 nvme0n1 00:17:58.633 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.633 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.633 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:58.634 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.634 
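
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line that precedes every attach builds the controller-key arguments only when a ckey exists for that key id; keyid 4 has an empty ckey, which is why the next attach below (key4) carries no --dhchap-ctrlr-key at all. The expansion in isolation, using values from this trace:

  # ${var:+word} expands to word only when var is set and non-empty.
  keyid=3; ckeys[3]='DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR:'
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"     # --dhchap-ctrlr-key ckey3

  keyid=4; ckeys[4]=''
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"    # 0 -- the attach gets no controller-key flags
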
18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.201 nvme0n1 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:59.201 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.201 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:59.201 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:59.201 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:59.201 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:59.201 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.201 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.461 nvme0n1 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.461 18:05:06 
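
The key= / ckey= values assigned in the next entries follow the DHHC-1 secret representation used for NVMe in-band authentication; as far as this log allows one to tell (treat the details as an assumption, the log itself never spells them out), the third colon-separated field is base64 of the configured secret followed by a 4-byte CRC-32. A quick length check on one of the secrets above:

  key='DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24:'
  blob=$(cut -d: -f3 <<< "$key")
  printf 'decoded bytes: %d (secret + assumed 4-byte CRC)\n' "$(base64 -d <<< "$blob" | wc -c)"
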
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.461 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.028 nvme0n1 00:18:00.028 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.028 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.028 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.028 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.028 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.028 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.286 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.286 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.286 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.287 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.888 nvme0n1 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.888 
18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.888 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.455 nvme0n1 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.455 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.388 nvme0n1 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.388 18:05:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:02.388 18:05:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:02.388 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.389 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:02.389 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:02.389 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:02.389 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:02.389 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.389 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.954 nvme0n1 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.954 nvme0n1 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:02.954 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.955 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.213 nvme0n1 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:03.213 
18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.213 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.214 nvme0n1 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.214 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.472 
18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:03.472 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.473 nvme0n1 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.473 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.731 nvme0n1 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:03.731 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.732 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.990 nvme0n1 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.990 
18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.990 18:05:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.990 nvme0n1 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.990 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.991 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.991 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:04.250 18:05:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.250 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.250 nvme0n1 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.250 18:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.250 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.556 nvme0n1 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.556 
18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.556 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
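The pass above covers every key index for the ffdhe3072 DH group: the target-side key is programmed with nvmet_auth_set_key, the host is restricted to hmac(sha384)/ffdhe3072 via bdev_nvme_set_options, the controller is attached with --dhchap-key (and --dhchap-ctrlr-key where a controller key exists), its name is verified through bdev_nvme_get_controllers, and it is detached before the next iteration; the trace then repeats the same loop for ffdhe4096 and ffdhe6144. A minimal host-side sketch of that loop follows. It is an illustration only, not the test script itself: it assumes scripts/rpc.py is invoked directly instead of the rpc_cmd wrapper used in the trace, and that the DH-HMAC-CHAP secrets were already registered as key0..key4 and ckey0..ckey4 earlier in the test, outside this excerpt.

#!/usr/bin/env bash
# Sketch of the connect_authenticate loop seen in the trace (host side only).
rpc=scripts/rpc.py          # assumed location of the SPDK JSON-RPC client
digest=sha384

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in 0 1 2 3 4; do
        # Allow exactly one digest/DH-group combination for this pass.
        "$rpc" bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Key index 4 has no controller key in the trace, so the
        # --dhchap-ctrlr-key argument is dropped for that iteration.
        ctrlr_key=()
        if [[ $keyid -ne 4 ]]; then
            ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")
        fi

        # Attach over TCP with DH-HMAC-CHAP; this only succeeds if
        # authentication against the target completes.
        "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

        # Verify the controller came up, then tear it down for the next pass.
        [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] || exit 1
        "$rpc" bdev_nvme_detach_controller nvme0
    done
done
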
00:18:04.557 nvme0n1 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.557 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:04.816 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:04.817 18:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.817 nvme0n1 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.817 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.076 18:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.076 18:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.076 nvme0n1 00:18:05.076 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.076 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.076 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.076 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.076 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.076 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.335 nvme0n1 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.335 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.594 nvme0n1 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.594 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.853 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.854 nvme0n1 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.854 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.113 18:05:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.113 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.374 nvme0n1 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.374 18:05:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.374 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.938 nvme0n1 00:18:06.938 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.938 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.938 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.939 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.197 nvme0n1 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.197 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
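
Each round is preceded by nvmet_auth_set_key <digest> <dhgroup> <keyid>, whose body is visible in the xtrace at host/auth.sh@42-51: it resolves the key/ckey pair for the requested slot and echoes the HMAC name, the DH group and the secrets. xtrace does not print redirections, so the destinations of those echoes are not captured here; they are presumably the kernel nvmet configfs attributes of the host entry, and the paths in the sketch below are therefore an assumption, not something this log shows:

    # Reconstruction of the helper from the trace (host/auth.sh@42-51).
    # The configfs destination paths are an ASSUMPTION -- the redirect targets
    # are not visible in this excerpt.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
        echo "hmac(${digest})" > "${host}/dhchap_hash"      # assumed attribute
        echo "${dhgroup}" > "${host}/dhchap_dhgroup"        # assumed attribute
        echo "${key}" > "${host}/dhchap_key"                # assumed attribute
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"  # assumed attribute
    }
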
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.198 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.763 nvme0n1 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:07.763 18:05:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:07.763 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:07.764 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:07.764 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.764 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.060 nvme0n1 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.060 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.061 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.628 nvme0n1 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.628 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.568 nvme0n1 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.568 18:05:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.568 18:05:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.568 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.136 nvme0n1 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:10.136 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.136 
18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.703 nvme0n1 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.703 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.270 nvme0n1 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:11.270 18:05:18 
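
The host/auth.sh@100-@104 frames that keep reappearing are the outer test driver: three nested loops over the configured digests, DH groups and key slots, each iteration installing the target key and then running connect_authenticate. Only sha384/sha512 and ffdhe2048/ffdhe6144/ffdhe8192 appear in this excerpt, so the array contents in the sketch are illustrative rather than exhaustive:

    # Shape of the driver loop as traced at host/auth.sh@100-104; the exact
    # contents of digests/dhgroups are not fully visible in this excerpt.
    for digest in "${digests[@]}"; do          # e.g. sha384 sha512 ...
        for dhgroup in "${dhgroups[@]}"; do    # e.g. ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do     # key slots 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
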
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:11.270 18:05:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.270 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.529 nvme0n1 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:11.529 18:05:18 
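
Every attach is preceded by get_main_ns_ip (nvmf/common.sh@741-755), which maps the transport to the name of the shell variable holding the address to dial and then dereferences it; for tcp that is NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this run. A reconstruction from the trace follows; the transport variable name (TEST_TRANSPORT) and the failure path are assumptions, since the trace only shows the tcp success path:

    # Reconstruction of get_main_ns_ip as traced at nvmf/common.sh@741-755:
    # pick the variable name matching the transport, dereference it, print it.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # Assumed guard/error handling; only the success branch is visible.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"   # 10.0.0.1 in this run
    }
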
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.529 nvme0n1 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.529 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 nvme0n1 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:11.788 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.789 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.047 nvme0n1 00:18:12.047 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.047 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.047 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.047 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.047 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.047 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.047 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.047 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.047 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.047 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.048 nvme0n1 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.048 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.048 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.306 nvme0n1 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.306 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.565 nvme0n1 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:12.565 
18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.565 nvme0n1 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.565 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.824 
18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.824 nvme0n1 00:18:12.824 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.825 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.084 nvme0n1 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.084 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:13.084 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:13.084 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:13.084 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.084 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.084 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.350 nvme0n1 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.350 
18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:13.350 18:05:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.350 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.609 nvme0n1 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:13.609 18:05:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.609 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.867 nvme0n1 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.867 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.868 18:05:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.868 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.125 nvme0n1 00:18:14.125 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.125 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.125 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.125 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.126 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.126 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:14.126 
18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.126 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
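[editor's note] For readability, here is what each iteration of the key loop traced above amounts to, condensed from the xtrace output into a plain shell sketch (this is a reconstruction, not the verbatim script: rpc_cmd is the harness's rpc.py wrapper, nvmet_auth_set_key is the host/auth.sh helper that installs the same digest, DH group and DHHC-1 secrets on the kernel nvmet target -- its redirect targets are not captured by xtrace -- and key$keyid / ckey$keyid are the names under which the secrets were registered earlier in the run):

    # target side: require hmac(sha512) over ffdhe4096 and install key/ctrl-key number $keyid
    nvmet_auth_set_key sha512 ffdhe4096 "$keyid"
    # host side: restrict the initiator to the same digest and DH group
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # the controller key is optional; keyid=4 has none, so the extra argument is dropped
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # connect with DH-HMAC-CHAP, verify the controller comes up, then tear it down
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The remainder of the trace below repeats exactly this sequence for the ffdhe6144 and ffdhe8192 DH groups.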
00:18:14.384 nvme0n1 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:14.384 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:14.385 18:05:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.385 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.952 nvme0n1 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.952 18:05:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.952 18:05:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.952 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.212 nvme0n1 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.212 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.779 nvme0n1 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.779 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.038 nvme0n1 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.038 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.038 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.618 nvme0n1 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzUxYTc1N2JkMWQ5MDI5ZjE1MGVhYjkzZTI5NTQ0MGbH2b24: 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: ]] 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM3MjBkYjMxZTM3ZDZlMGY2ZThlMDdmNGVhNWVkZDM0YjNmMWMyOWFjOWFhNWIzNDBiODE2NDkxZWVhMzM1YS1z9f0=: 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.618 18:05:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.618 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.619 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:16.619 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.619 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:16.619 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:16.619 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:16.619 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.619 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.619 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.189 nvme0n1 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.189 18:05:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.189 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.754 nvme0n1 00:18:17.754 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.754 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.754 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.754 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.754 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.754 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2OWE5MTQwNTY3MDE2MWE4Y2E2MjE3MzQ4MGZjZWMY+YgT: 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: ]] 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY4MGFkZDIzYzQwNTM4YTg3ZmE3NDkwZGQ0YmU1ZGQibb5J: 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:18.012 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:18.013 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:18.013 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:18.013 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:18.013 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:18.013 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:18.013 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:18.013 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.013 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.013 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.577 nvme0n1 00:18:18.577 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.577 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:18.577 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.577 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:18.577 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.577 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.577 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.577 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:18.577 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTdlYjI0NDg3ZmU2ZWI0MzYxMjgzMmVmMDAwYWU2NTgwYzQwYzc1M2QzNzEyMWIwnszFRA==: 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: ]] 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE3ZTU1ZTYzNmRlZjE1ZDdiYmVkYTE4ZGU4M2I1OGbjEwMR: 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.578 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.144 nvme0n1 00:18:19.144 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.144 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.144 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.144 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.144 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhlZjFkZjZjZGZhOTU1OWYxYWI0OWUzM2E2ZDFhYjMxYWE1YzJhODQ5OGFjZmQ0NjQzMTQ5OTM3M2U5Nzg4NZrcnCE=: 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:19.144 18:05:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.144 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.710 nvme0n1 00:18:19.710 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.710 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:19.710 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.710 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.710 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.710 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJjYTFiMGNmODIyYzFmMGYwNzUxYzYxMjIzMjYyN2U0NTU3MGQyMzZiZmE5NDE2Botbww==: 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0ODc5MWFhMjhlNzQ3MDVhNGY1MGE2N2E1OTUyZGVkOTkxMzJlYTBlN2Y1NDAyKTcUhw==: 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.972 2024/07/24 18:05:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:19.972 request: 00:18:19.972 { 00:18:19.972 "method": "bdev_nvme_attach_controller", 00:18:19.972 "params": { 00:18:19.972 "name": "nvme0", 00:18:19.972 "trtype": "tcp", 00:18:19.972 "traddr": "10.0.0.1", 00:18:19.972 "adrfam": "ipv4", 00:18:19.972 "trsvcid": "4420", 00:18:19.972 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:19.972 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:19.972 "prchk_reftag": false, 00:18:19.972 "prchk_guard": false, 00:18:19.972 "hdgst": false, 00:18:19.972 "ddgst": false 00:18:19.972 } 00:18:19.972 } 00:18:19.972 Got JSON-RPC error response 00:18:19.972 GoRPCClient: error on JSON-RPC call 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- 
# local ip 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.972 2024/07/24 18:05:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:19.972 request: 00:18:19.972 { 00:18:19.972 "method": "bdev_nvme_attach_controller", 00:18:19.972 "params": { 00:18:19.972 "name": "nvme0", 00:18:19.972 "trtype": "tcp", 00:18:19.972 "traddr": "10.0.0.1", 00:18:19.972 "adrfam": "ipv4", 00:18:19.972 "trsvcid": "4420", 00:18:19.972 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:19.972 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:19.972 "prchk_reftag": false, 00:18:19.972 "prchk_guard": false, 00:18:19.972 "hdgst": false, 00:18:19.972 "ddgst": false, 00:18:19.972 "dhchap_key": "key2" 00:18:19.972 } 00:18:19.972 } 00:18:19.972 Got 
JSON-RPC error response 00:18:19.972 GoRPCClient: error on JSON-RPC call 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.972 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.973 2024/07/24 18:05:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:19.973 request: 00:18:19.973 { 00:18:19.973 "method": "bdev_nvme_attach_controller", 00:18:19.973 "params": { 00:18:19.973 "name": "nvme0", 00:18:19.973 "trtype": "tcp", 00:18:19.973 "traddr": "10.0.0.1", 00:18:19.973 "adrfam": "ipv4", 00:18:19.973 "trsvcid": "4420", 00:18:19.973 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:19.973 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:19.973 "prchk_reftag": false, 00:18:19.973 "prchk_guard": false, 00:18:19.973 "hdgst": false, 00:18:19.973 "ddgst": false, 00:18:19.973 "dhchap_key": "key1", 00:18:19.973 "dhchap_ctrlr_key": "ckey2" 00:18:19.973 } 00:18:19.973 } 00:18:19.973 Got JSON-RPC error response 00:18:19.973 GoRPCClient: error on JSON-RPC call 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.973 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:20.238 rmmod nvme_tcp 00:18:20.238 rmmod nvme_fabrics 00:18:20.238 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:20.238 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:18:20.238 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:18:20.238 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 90789 ']' 00:18:20.238 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # 
killprocess 90789 00:18:20.238 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 90789 ']' 00:18:20.238 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 90789 00:18:20.238 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:18:20.238 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:20.238 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90789 00:18:20.238 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:20.238 killing process with pid 90789 00:18:20.238 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:20.238 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90789' 00:18:20.238 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 90789 00:18:20.239 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 90789 00:18:20.239 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:20.239 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:20.239 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:20.239 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.239 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:20.239 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.239 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.239 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:20.497 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:21.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:21.320 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:21.320 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:21.320 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.NXP /tmp/spdk.key-null.2GI /tmp/spdk.key-sha256.oRM /tmp/spdk.key-sha384.Dck /tmp/spdk.key-sha512.THb /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:21.320 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:21.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:21.886 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:21.886 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:21.886 00:18:21.886 real 0m34.981s 00:18:21.886 user 0m31.279s 00:18:21.886 sys 0m4.348s 00:18:21.886 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:21.886 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.886 ************************************ 00:18:21.886 END TEST nvmf_auth_host 00:18:21.886 ************************************ 00:18:21.886 18:05:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:21.886 18:05:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:21.886 18:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:21.886 18:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:21.886 18:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.886 ************************************ 00:18:21.886 START TEST nvmf_digest 00:18:21.886 ************************************ 00:18:21.886 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:22.145 * Looking for test storage... 
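The teardown traced above removes the kernel-mode NVMe/TCP target that acted as the DH-HMAC-CHAP peer. A condensed sketch of the same configfs sequence, with the paths copied from the trace; the redirect target of the bare 'echo 0' step is not visible in the xtrace, so the namespace-disable line is an assumption:

subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
cfg=/sys/kernel/config/nvmet
rm "$cfg/subsystems/$subnqn/allowed_hosts/$hostnqn"        # host/auth.sh@25
rmdir "$cfg/hosts/$hostnqn"                                # host/auth.sh@26
if [[ -e $cfg/subsystems/$subnqn ]]; then                  # clean_kernel_target
    echo 0 > "$cfg/subsystems/$subnqn/namespaces/1/enable" # assumed target of the traced 'echo 0'
    rm -f "$cfg/ports/1/subsystems/$subnqn"                # unlink the subsystem from the port
    rmdir "$cfg/subsystems/$subnqn/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$subnqn"
    modprobe -r nvmet_tcp nvmet                            # unload the kernel target modules
fi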
00:18:22.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.145 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:22.146 
18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:22.146 Cannot find device "nvmf_tgt_br" 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.146 Cannot find device "nvmf_tgt_br2" 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:22.146 
Cannot find device "nvmf_tgt_br" 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:18:22.146 18:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:22.146 Cannot find device "nvmf_tgt_br2" 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:22.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:22.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:22.146 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set 
nvmf_init_br master nvmf_br 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:22.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:18:22.404 00:18:22.404 --- 10.0.0.2 ping statistics --- 00:18:22.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.404 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:22.404 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:22.404 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:22.404 00:18:22.404 --- 10.0.0.3 ping statistics --- 00:18:22.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.404 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:22.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:22.404 00:18:22.404 --- 10.0.0.1 ping statistics --- 00:18:22.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.404 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:22.404 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:22.405 ************************************ 00:18:22.405 START TEST nvmf_digest_clean 00:18:22.405 
************************************ 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=92379 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 92379 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92379 ']' 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.405 18:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:22.405 [2024-07-24 18:05:29.365126] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:18:22.405 [2024-07-24 18:05:29.365222] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.664 [2024-07-24 18:05:29.505107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.922 [2024-07-24 18:05:29.683047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.922 [2024-07-24 18:05:29.683126] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
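Before the digest tests run any I/O, nvmf_veth_init (traced above) builds the virtual topology the "Cannot find device" probes were checking for: the initiator keeps nvmf_init_if (10.0.0.1) in the default namespace, the target interfaces sit in nvmf_tgt_ns_spdk (10.0.0.2 and 10.0.0.3), and the host-side veth ends are joined by the nvmf_br bridge. A condensed sketch of the same commands, all taken from the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the three host-side ends
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                # forward across the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                           # the sanity pings from the trace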
00:18:22.922 [2024-07-24 18:05:29.683142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.922 [2024-07-24 18:05:29.683155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.922 [2024-07-24 18:05:29.683166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.922 [2024-07-24 18:05:29.683217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:23.857 null0 00:18:23.857 [2024-07-24 18:05:30.705188] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.857 [2024-07-24 18:05:30.729354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
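The batched rpc_cmd at host/digest.sh@43 is collapsed in the trace; judging by the resulting null0 bdev, the "TCP Transport Init" notice and the listener on 10.0.0.2:4420, it is roughly equivalent to the rpc.py calls below. Only the bdev name, the NQN, the serial and the listener address/transport options come from the trace; the null bdev size/block size and the subsystem flags are assumptions:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # default socket /var/tmp/spdk.sock
$rpc framework_start_init                                # the target was started with --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o                     # NVMF_TRANSPORT_OPTS='-t tcp -o' from the trace
$rpc bdev_null_create null0 100 4096                     # bdev named "null0" in the trace; size/block size assumed
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420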
00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92429 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92429 /var/tmp/bperf.sock 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92429 ']' 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.857 18:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:23.857 [2024-07-24 18:05:30.781008] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
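run_bperf starts a second SPDK app, bdevperf, with its own RPC socket so the test can configure it and kick off I/O on demand. A sketch of the launch traced above; the polling loop is only a stand-in for the waitforlisten helper:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# -m 2: run on core 1 only; -r: private RPC socket; -w/-o/-q/-t: randread, 4 KiB, QD 128, 2 s;
# -z and --wait-for-rpc defer framework init and the actual run to later RPC calls.
"$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
bperfpid=$!
# Stand-in for waitforlisten: poll until bperf.sock answers RPCs.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done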
00:18:23.857 [2024-07-24 18:05:30.781123] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92429 ] 00:18:24.116 [2024-07-24 18:05:30.920786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.116 [2024-07-24 18:05:31.031228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.116 18:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.116 18:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:24.116 18:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:24.116 18:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:24.116 18:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:24.682 18:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:24.682 18:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:24.940 nvme0n1 00:18:24.940 18:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:24.940 18:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:24.940 Running I/O for 2 seconds... 
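With bdevperf listening on bperf.sock, the traced sequence finishes its deferred init, attaches the target over NVMe/TCP with the data-digest flag (so every payload carries a CRC32C digest, which is what the accel counters checked later are meant to prove), and triggers the timed run. The same steps as plain commands, all taken from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf=/var/tmp/bperf.sock
$rpc -s $bperf framework_start_init
# --ddgst turns on the NVMe/TCP data digest on the initiator side; the header digest stays off.
$rpc -s $bperf bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 --ddgst
# perform_tests tells the -z'd bdevperf to actually start the 2-second randread job.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf perform_tests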
00:18:27.479 00:18:27.479 Latency(us) 00:18:27.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.479 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:27.479 nvme0n1 : 2.00 20322.14 79.38 0.00 0.00 6291.58 2995.93 17351.44 00:18:27.479 =================================================================================================================== 00:18:27.479 Total : 20322.14 79.38 0.00 0.00 6291.58 2995.93 17351.44 00:18:27.479 0 00:18:27.479 18:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:27.479 18:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:27.479 18:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:27.479 | select(.opcode=="crc32c") 00:18:27.479 | "\(.module_name) \(.executed)"' 00:18:27.479 18:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:27.479 18:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92429 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92429 ']' 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92429 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92429 00:18:27.479 killing process with pid 92429 00:18:27.479 Received shutdown signal, test time was about 2.000000 seconds 00:18:27.479 00:18:27.479 Latency(us) 00:18:27.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.479 =================================================================================================================== 00:18:27.479 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92429' 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92429 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92429 00:18:27.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92506 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92506 /var/tmp/bperf.sock 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92506 ']' 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:27.479 18:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:27.479 [2024-07-24 18:05:34.435483] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:18:27.479 [2024-07-24 18:05:34.436526] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92506 ] 00:18:27.479 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:27.479 Zero copy mechanism will not be used. 
00:18:27.737 [2024-07-24 18:05:34.579995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.737 [2024-07-24 18:05:34.699125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.672 18:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.672 18:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:28.672 18:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:28.672 18:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:28.672 18:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:28.931 18:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:28.931 18:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:29.189 nvme0n1 00:18:29.447 18:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:29.447 18:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:29.447 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:29.447 Zero copy mechanism will not be used. 00:18:29.447 Running I/O for 2 seconds... 
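After each 2-second run, the pass/fail check is the accel statistics query traced above: the test requires that at least one crc32c operation executed and that it ran in the expected module (software here, since these runs use scan_dsa=false). Condensed from the trace, that check is:

  # fetch accel stats from bdevperf and keep only the crc32c entry
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | \
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # digest.sh reads the output as "acc_module acc_executed" and asserts:
  #   acc_executed > 0   and   acc_module == software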
00:18:31.347 00:18:31.347 Latency(us) 00:18:31.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.347 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:31.347 nvme0n1 : 2.00 8203.28 1025.41 0.00 0.00 1946.95 647.56 7926.74 00:18:31.347 =================================================================================================================== 00:18:31.347 Total : 8203.28 1025.41 0.00 0.00 1946.95 647.56 7926.74 00:18:31.347 0 00:18:31.347 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:31.347 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:31.347 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:31.347 | select(.opcode=="crc32c") 00:18:31.347 | "\(.module_name) \(.executed)"' 00:18:31.347 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:31.347 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92506 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92506 ']' 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92506 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92506 00:18:31.913 killing process with pid 92506 00:18:31.913 Received shutdown signal, test time was about 2.000000 seconds 00:18:31.913 00:18:31.913 Latency(us) 00:18:31.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.913 =================================================================================================================== 00:18:31.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92506' 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92506 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92506 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:31.913 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92596 00:18:31.914 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92596 /var/tmp/bperf.sock 00:18:31.914 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:31.914 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92596 ']' 00:18:31.914 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:31.914 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:31.914 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:31.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:31.914 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:31.914 18:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:31.914 [2024-07-24 18:05:38.880381] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:18:31.914 [2024-07-24 18:05:38.880511] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92596 ] 00:18:32.172 [2024-07-24 18:05:39.027402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.172 [2024-07-24 18:05:39.137167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.129 18:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.129 18:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:33.129 18:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:33.129 18:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:33.129 18:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:33.387 18:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:33.387 18:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:33.955 nvme0n1 00:18:33.955 18:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:33.955 18:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:33.955 Running I/O for 2 seconds... 
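The killprocess helper from autotest_common.sh that frames every run reduces to roughly the following. This is a simplified sketch of the logic visible in the traces above, not the real helper; the uname check and the sudo-wrapper branch are only hinted at here.

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                  # the '[' -z ... ']' guard in the trace
      kill -0 "$pid" || return 1                 # is the process still alive?
      local name
      name=$(ps --no-headers -o comm= "$pid")    # reactor_1 for these bdevperf instances
      [ "$name" = sudo ] && return 1             # the real helper treats sudo wrappers specially
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                # reap it before the next run starts
  }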
00:18:35.880 00:18:35.880 Latency(us) 00:18:35.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.880 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.880 nvme0n1 : 2.01 24526.84 95.81 0.00 0.00 5212.79 2278.16 15416.56 00:18:35.880 =================================================================================================================== 00:18:35.880 Total : 24526.84 95.81 0.00 0.00 5212.79 2278.16 15416.56 00:18:35.880 0 00:18:35.880 18:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:35.880 18:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:35.880 18:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:35.880 18:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:35.880 18:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:35.880 | select(.opcode=="crc32c") 00:18:35.880 | "\(.module_name) \(.executed)"' 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92596 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92596 ']' 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92596 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92596 00:18:36.445 killing process with pid 92596 00:18:36.445 Received shutdown signal, test time was about 2.000000 seconds 00:18:36.445 00:18:36.445 Latency(us) 00:18:36.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.445 =================================================================================================================== 00:18:36.445 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92596' 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92596 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92596 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92692 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92692 /var/tmp/bperf.sock 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92692 ']' 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:36.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.445 18:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:36.703 [2024-07-24 18:05:43.452498] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:18:36.703 [2024-07-24 18:05:43.452615] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92692 ] 00:18:36.703 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:36.703 Zero copy mechanism will not be used. 
00:18:36.703 [2024-07-24 18:05:43.588122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.984 [2024-07-24 18:05:43.690695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.552 18:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.552 18:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:37.552 18:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:37.552 18:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:37.552 18:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:37.811 18:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:37.811 18:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:38.070 nvme0n1 00:18:38.070 18:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:38.070 18:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:38.328 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:38.328 Zero copy mechanism will not be used. 00:18:38.328 Running I/O for 2 seconds... 
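Each run_bperf invocation maps its rw/bs/qd arguments straight onto the bdevperf command line; for this last clean run the traced command is, with the flag meanings spelled out (values copied from the log):

  # -m 2: core mask 0x2; -r: private RPC socket; -w/-o/-q: workload, I/O size in
  # bytes, queue depth; -t 2: 2-second run; -z: wait for the perform_tests RPC;
  # --wait-for-rpc: defer subsystem init until framework_start_init is called
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc

The "I/O size of 131072 is greater than zero copy threshold (65536)" notices on the 128 KiB runs are expected: those I/Os exceed the 64 KiB zero-copy threshold, so the transport falls back to regular copies, as the log itself states.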
00:18:40.864 00:18:40.864 Latency(us) 00:18:40.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.864 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:40.864 nvme0n1 : 2.00 7565.24 945.66 0.00 0.00 2110.77 1341.93 3370.42 00:18:40.864 =================================================================================================================== 00:18:40.864 Total : 7565.24 945.66 0.00 0.00 2110.77 1341.93 3370.42 00:18:40.864 0 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:40.864 | select(.opcode=="crc32c") 00:18:40.864 | "\(.module_name) \(.executed)"' 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92692 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92692 ']' 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92692 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92692 00:18:40.864 killing process with pid 92692 00:18:40.864 Received shutdown signal, test time was about 2.000000 seconds 00:18:40.864 00:18:40.864 Latency(us) 00:18:40.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.864 =================================================================================================================== 00:18:40.864 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92692' 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92692 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92692 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 92379 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92379 ']' 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92379 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92379 00:18:40.864 killing process with pid 92379 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92379' 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92379 00:18:40.864 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 92379 00:18:41.123 00:18:41.123 real 0m18.647s 00:18:41.123 user 0m35.195s 00:18:41.123 sys 0m5.183s 00:18:41.123 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:41.123 ************************************ 00:18:41.123 END TEST nvmf_digest_clean 00:18:41.123 ************************************ 00:18:41.123 18:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:41.123 ************************************ 00:18:41.123 START TEST nvmf_digest_error 00:18:41.123 ************************************ 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=92804 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 92804 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 92804 ']' 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.123 18:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:41.123 [2024-07-24 18:05:48.083575] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:18:41.123 [2024-07-24 18:05:48.083680] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.381 [2024-07-24 18:05:48.226677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.381 [2024-07-24 18:05:48.350367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.381 [2024-07-24 18:05:48.350430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.381 [2024-07-24 18:05:48.350443] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.381 [2024-07-24 18:05:48.350453] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.381 [2024-07-24 18:05:48.350463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:41.381 [2024-07-24 18:05:48.350498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:42.314 [2024-07-24 18:05:49.127043] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:42.314 null0 00:18:42.314 [2024-07-24 18:05:49.240489] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.314 [2024-07-24 18:05:49.264615] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:42.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
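The nvmf_digest_error variant differs from the clean runs in how crc32c is handled: the accel_rpc notice above shows crc32c being assigned to the accel 'error' module during target startup, and corruption is then injected right before the measurement. Condensed from the RPCs traced in this part of the log (a recap of what the test issues, not new configuration; rpc_cmd is the test suite's rpc.py wrapper, which talks to the nvmf target here):

  # target side: crc32c routed through the accel 'error' module (notice above)
  rpc_cmd accel_assign_opc -o crc32c -m error
  # bdevperf side: per-NVMe error counters and unlimited bdev retries
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # arm the error module to corrupt 256 crc32c operations, then run perform_tests;
  # the mismatches surface below as 'data digest error' entries and
  # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256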
00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=92854 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 92854 /var/tmp/bperf.sock 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 92854 ']' 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:42.314 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:42.572 [2024-07-24 18:05:49.317479] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:18:42.572 [2024-07-24 18:05:49.317814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92854 ] 00:18:42.572 [2024-07-24 18:05:49.455914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.830 [2024-07-24 18:05:49.578608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.395 18:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.395 18:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:43.395 18:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:43.395 18:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:43.961 18:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:43.961 18:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.961 18:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:43.961 18:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.961 18:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:43.961 18:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:44.219 nvme0n1 00:18:44.219 18:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:44.219 18:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.219 18:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:44.219 18:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.219 18:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:44.219 18:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:44.219 Running I/O for 2 seconds... 00:18:44.219 [2024-07-24 18:05:51.189328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.219 [2024-07-24 18:05:51.189406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-07-24 18:05:51.189421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.477 [2024-07-24 18:05:51.200277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.477 [2024-07-24 18:05:51.200344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.477 [2024-07-24 18:05:51.200358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.477 [2024-07-24 18:05:51.211861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.477 [2024-07-24 18:05:51.211929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.477 [2024-07-24 18:05:51.211943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.477 [2024-07-24 18:05:51.226866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.477 [2024-07-24 18:05:51.226937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.477 [2024-07-24 18:05:51.226951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.477 [2024-07-24 18:05:51.237964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.477 [2024-07-24 18:05:51.238029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.477 [2024-07-24 18:05:51.238043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.477 [2024-07-24 18:05:51.251616] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.477 [2024-07-24 18:05:51.251682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.477 [2024-07-24 18:05:51.251695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.477 [2024-07-24 18:05:51.264010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.477 [2024-07-24 18:05:51.264063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.477 [2024-07-24 18:05:51.264078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.477 [2024-07-24 18:05:51.278455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.477 [2024-07-24 18:05:51.278507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.477 [2024-07-24 18:05:51.278521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.477 [2024-07-24 18:05:51.289849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.477 [2024-07-24 18:05:51.289894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.477 [2024-07-24 18:05:51.289908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.477 [2024-07-24 18:05:51.302597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.302641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.302654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.315691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.315734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.315748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.326587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.326637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.326649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.338850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.338893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.338905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.352847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.352889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.352902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.362749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.362791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.362804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.374183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.374237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.374265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.387422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.387489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.387504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.401308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.401366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.401380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.414320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.414372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.414385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.425594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.425649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.425663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.437962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.438009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.438023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.478 [2024-07-24 18:05:51.450297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.478 [2024-07-24 18:05:51.450348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.478 [2024-07-24 18:05:51.450378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.461196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.461254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.461268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.473162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.473209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.473222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.485961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.486011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.486024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.499112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.499162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.499175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.511925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.511974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.511987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.523387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.523440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.523454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.536036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.536088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.536118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.547126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.547170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.547183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.560593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.560636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.560649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.572635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.572680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.572694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.584436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.584482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:44.736 [2024-07-24 18:05:51.584513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.596657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.596708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.596722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.610502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.610555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.610570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.622566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.622613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.622628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.636399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.636447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.636461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.650698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.650751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.650765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.663762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.663812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.663826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.677160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.677215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:15908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.677230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.736 [2024-07-24 18:05:51.688267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.736 [2024-07-24 18:05:51.688313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.736 [2024-07-24 18:05:51.688327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.737 [2024-07-24 18:05:51.700900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.737 [2024-07-24 18:05:51.700948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.737 [2024-07-24 18:05:51.700963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.713739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.995 [2024-07-24 18:05:51.713785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.713799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.726407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.995 [2024-07-24 18:05:51.726459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.726473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.738439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.995 [2024-07-24 18:05:51.738496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.738528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.751828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.995 [2024-07-24 18:05:51.751881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.751895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.764617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.995 [2024-07-24 18:05:51.764668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.764682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.777738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.995 [2024-07-24 18:05:51.777789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.777803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.790041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.995 [2024-07-24 18:05:51.790088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.790102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.801725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.995 [2024-07-24 18:05:51.801774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.801788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.816357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.995 [2024-07-24 18:05:51.816411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.816426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.829470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.995 [2024-07-24 18:05:51.829517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.829531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.841652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.995 [2024-07-24 18:05:51.841701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.841715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.855781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 
00:18:44.995 [2024-07-24 18:05:51.855834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.995 [2024-07-24 18:05:51.855848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.995 [2024-07-24 18:05:51.868877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.996 [2024-07-24 18:05:51.868935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.996 [2024-07-24 18:05:51.868949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.996 [2024-07-24 18:05:51.879569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.996 [2024-07-24 18:05:51.879622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.996 [2024-07-24 18:05:51.879637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.996 [2024-07-24 18:05:51.893141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.996 [2024-07-24 18:05:51.893201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.996 [2024-07-24 18:05:51.893215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.996 [2024-07-24 18:05:51.906491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.996 [2024-07-24 18:05:51.906541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.996 [2024-07-24 18:05:51.906555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.996 [2024-07-24 18:05:51.917457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.996 [2024-07-24 18:05:51.917505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.996 [2024-07-24 18:05:51.917518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.996 [2024-07-24 18:05:51.930946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.996 [2024-07-24 18:05:51.930999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.996 [2024-07-24 18:05:51.931012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.996 [2024-07-24 18:05:51.945256] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.996 [2024-07-24 18:05:51.945321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.996 [2024-07-24 18:05:51.945336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.996 [2024-07-24 18:05:51.957338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.996 [2024-07-24 18:05:51.957394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.996 [2024-07-24 18:05:51.957409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.996 [2024-07-24 18:05:51.969071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:44.996 [2024-07-24 18:05:51.969125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.996 [2024-07-24 18:05:51.969139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.255 [2024-07-24 18:05:51.982371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.255 [2024-07-24 18:05:51.982418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.255 [2024-07-24 18:05:51.982432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.255 [2024-07-24 18:05:51.996152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.255 [2024-07-24 18:05:51.996203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.255 [2024-07-24 18:05:51.996217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.255 [2024-07-24 18:05:52.009016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.255 [2024-07-24 18:05:52.009065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.255 [2024-07-24 18:05:52.009079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.255 [2024-07-24 18:05:52.021546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.255 [2024-07-24 18:05:52.021591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.255 [2024-07-24 18:05:52.021605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:45.255 [2024-07-24 18:05:52.033563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.255 [2024-07-24 18:05:52.033624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.255 [2024-07-24 18:05:52.033639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.255 [2024-07-24 18:05:52.046892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.255 [2024-07-24 18:05:52.046954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.255 [2024-07-24 18:05:52.046968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.060057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.060119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.060135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.073025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.073097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.073111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.085955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.086021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.086035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.098316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.098384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.098399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.110506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.110563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.110577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.125428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.125486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.125501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.136663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.136725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.136739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.151746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.151810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.151825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.163744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.163808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.163823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.178106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.178170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.178184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.191690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.191763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.191777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.205320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.205387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.205402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.256 [2024-07-24 18:05:52.217666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.256 [2024-07-24 18:05:52.217730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.256 [2024-07-24 18:05:52.217745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.231130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.231195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.231210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.245208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.245286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.245302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.256802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.256863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.256877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.269498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.269563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.269577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.283026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.283086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.283102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.296324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.296379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:45.534 [2024-07-24 18:05:52.296394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.309195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.309269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.309284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.322601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.322665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.322679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.334669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.334731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.334746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.346985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.347048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.347062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.360085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.360149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.360164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.373786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.373842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.373857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.386723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.386780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:19232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.386795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.400543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.400620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.400635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.412315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.412360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.412374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.425409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.425457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.425470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.435924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.435968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.435982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.449122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.449163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.449177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.462809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.462847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.534 [2024-07-24 18:05:52.462876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.534 [2024-07-24 18:05:52.474027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.534 [2024-07-24 18:05:52.474077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.535 [2024-07-24 18:05:52.474091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.535 [2024-07-24 18:05:52.487854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.535 [2024-07-24 18:05:52.487917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.535 [2024-07-24 18:05:52.487932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.535 [2024-07-24 18:05:52.501317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.535 [2024-07-24 18:05:52.501370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.535 [2024-07-24 18:05:52.501400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.515296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.515348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.515362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.528602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.528654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.528669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.541043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.541101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.541115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.554215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.554284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.554299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.565443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 
00:18:45.793 [2024-07-24 18:05:52.565494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.565508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.580212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.580277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.580291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.593210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.593294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.593309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.605951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.606011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.606023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.618460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.618516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.618529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.630172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.630231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.630257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.643414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.643466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.643488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.656166] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.656224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.656238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.669008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.669068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.669083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.682286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.682344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.682358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.693555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.693602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.693615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.705690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.705746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.705761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.717094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.717144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.717157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.728224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.728284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.728299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:45.793 [2024-07-24 18:05:52.742498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.742548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.742562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:45.793 [2024-07-24 18:05:52.756106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:45.793 [2024-07-24 18:05:52.756154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.793 [2024-07-24 18:05:52.756185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.769747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.769795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.769809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.780699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.780746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.780760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.793618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.793664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.793677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.806606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.806665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.806696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.817449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.817494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.817507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.831188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.831272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.831288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.846488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.846557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.846572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.861565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.861633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.861648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.876253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.876317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.876331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.889210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.889300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.889320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.903992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.904059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.904077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.917216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.917306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.917327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.932049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.932110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.932125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.945285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.945362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.945383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.959414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.959493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.959509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.972071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.972136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.972151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.986554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.986637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.986659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:52.999459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:52.999562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:52.999587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:53.012409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:53.012477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:46.053 [2024-07-24 18:05:53.012493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.053 [2024-07-24 18:05:53.024946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.053 [2024-07-24 18:05:53.025034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.053 [2024-07-24 18:05:53.025048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.312 [2024-07-24 18:05:53.038535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.312 [2024-07-24 18:05:53.038623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.312 [2024-07-24 18:05:53.038643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.312 [2024-07-24 18:05:53.052963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.312 [2024-07-24 18:05:53.053050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.312 [2024-07-24 18:05:53.053070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.312 [2024-07-24 18:05:53.067273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.312 [2024-07-24 18:05:53.067358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.313 [2024-07-24 18:05:53.067378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.313 [2024-07-24 18:05:53.079405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.313 [2024-07-24 18:05:53.079505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.313 [2024-07-24 18:05:53.079526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.313 [2024-07-24 18:05:53.091908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.313 [2024-07-24 18:05:53.091978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.313 [2024-07-24 18:05:53.091994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.313 [2024-07-24 18:05:53.106542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.313 [2024-07-24 18:05:53.106616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:2657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.313 [2024-07-24 18:05:53.106631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.313 [2024-07-24 18:05:53.122570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.313 [2024-07-24 18:05:53.122668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.313 [2024-07-24 18:05:53.122695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.313 [2024-07-24 18:05:53.140893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.313 [2024-07-24 18:05:53.140985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.313 [2024-07-24 18:05:53.141010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.313 [2024-07-24 18:05:53.155260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.313 [2024-07-24 18:05:53.155347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.313 [2024-07-24 18:05:53.155367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.313 [2024-07-24 18:05:53.167143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1405e30) 00:18:46.313 [2024-07-24 18:05:53.167216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.313 [2024-07-24 18:05:53.167231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.313 00:18:46.313 Latency(us) 00:18:46.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.313 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:46.313 nvme0n1 : 2.00 19658.76 76.79 0.00 0.00 6503.20 3198.78 18849.40 00:18:46.313 =================================================================================================================== 00:18:46.313 Total : 19658.76 76.79 0.00 0.00 6503.20 3198.78 18849.40 00:18:46.313 0 00:18:46.313 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:46.313 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:46.313 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:46.313 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:46.313 | .driver_specific 00:18:46.313 | .nvme_error 00:18:46.313 | .status_code 00:18:46.313 | .command_transient_transport_error' 00:18:46.571 18:05:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 154 > 0 )) 00:18:46.571 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 92854 00:18:46.571 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 92854 ']' 00:18:46.571 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 92854 00:18:46.571 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:46.571 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.571 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92854 00:18:46.571 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:46.571 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:46.571 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92854' 00:18:46.571 killing process with pid 92854 00:18:46.571 Received shutdown signal, test time was about 2.000000 seconds 00:18:46.571 00:18:46.571 Latency(us) 00:18:46.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.571 =================================================================================================================== 00:18:46.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.571 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 92854 00:18:46.571 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 92854 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=92940 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 92940 /var/tmp/bperf.sock 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 92940 ']' 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:46.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
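
The assertion traced just above, (( 154 > 0 )), is the pay-off of the case that just finished: digest.sh reads the per-bdev NVMe error counters back over the bdevperf RPC socket and requires at least one transient transport error to have been recorded. A minimal standalone sketch of that check, using the same rpc.py socket and jq filter shown in the trace (the error handling of the real helpers is omitted):

    #!/usr/bin/env bash
    # Sketch of the get_transient_errcount step traced above. Assumes bdevperf is
    # still listening on /var/tmp/bperf.sock and that bdev_nvme_set_options was
    # called with --nvme-error-stat, otherwise bdev_get_iostat has no nvme_error
    # block to query.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    get_transient_errcount() {
        local bdev=$1
        "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The test only needs a non-zero count; this run observed 154.
    (( $(get_transient_errcount nvme0n1) > 0 ))
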
00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:46.830 18:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:46.830 [2024-07-24 18:05:53.778157] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:18:46.830 [2024-07-24 18:05:53.778900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92940 ] 00:18:46.830 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:46.830 Zero copy mechanism will not be used. 00:18:47.088 [2024-07-24 18:05:53.922908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.088 [2024-07-24 18:05:54.037086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.022 18:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.022 18:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:48.022 18:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:48.022 18:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:48.279 18:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:48.279 18:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.279 18:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:48.279 18:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.279 18:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:48.279 18:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:48.846 nvme0n1 00:18:48.846 18:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:48.846 18:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.846 18:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:48.846 18:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.846 18:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:48.846 18:05:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:49.106 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:49.106 Zero copy mechanism will not be used. 00:18:49.106 Running I/O for 2 seconds... 00:18:49.106 [2024-07-24 18:05:55.889728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.889799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.889815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.894218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.894286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.894301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.899700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.899751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.899766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.904419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.904462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.904475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.910792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.910839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.910852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.916706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.916749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.916762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.921665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.921705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.921718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.927919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.927961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.927974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.932411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.932454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.932467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.936926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.936968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.936980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.942339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.942381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.942393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.946335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.946373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.946385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.951297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.951337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.951349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.955946] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.955986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.955998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.960026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.960066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.960078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.964787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.964824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.964835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.968264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.968301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.968313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.973129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.973165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.973176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.977838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.977876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.977888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.982583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.982621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.982632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
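
The digest errors in this stretch come from the second error case (randread, 128 KiB I/O, queue depth 16), whose setup was traced before the "Running I/O for 2 seconds..." line. Condensed into plain commands, that setup amounts to the sketch below; the command lines and arguments are taken verbatim from the trace, while the treatment of rpc_cmd is an assumption noted in the comments:

    #!/usr/bin/env bash
    bperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    bperf_sock=/var/tmp/bperf.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Assumption for this sketch: rpc_cmd (the autotest_common.sh helper that
    # digest.sh uses for the accel calls) is modelled as rpc.py against the
    # default application socket; in the real run it talks to the RPC session
    # the suite opened earlier.
    rpc_cmd() { "$rpc" "$@"; }

    # 2-second randread job, 128 KiB I/Os, queue depth 16, started idle (-z).
    "$bperf" -m 2 -r "$bperf_sock" -w randread -o 131072 -t 2 -q 16 -z &
    # The real script waits via waitforlisten; polling the socket is a stand-in.
    while [[ ! -S "$bperf_sock" ]]; do sleep 0.1; done

    # Keep per-status-code NVMe error counters, retry failed commands forever.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # No injection while the controller attaches...
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # ...then attach over TCP with data digest enabled.
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt 32 crc32c results so data digest verification starts failing.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # Run the queued I/O; each corrupted digest surfaces in the log as a
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$bperf_sock" perform_tests
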
00:18:49.106 [2024-07-24 18:05:55.986919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.986963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.986976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.991563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.991607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.991620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:55.995715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:55.995759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.106 [2024-07-24 18:05:55.995772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.106 [2024-07-24 18:05:56.000998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.106 [2024-07-24 18:05:56.001042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.001055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.005021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.005072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.005086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.009282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.009324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.009337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.012968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.013014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.013026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.016592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.016638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.016650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.020714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.020772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.020786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.024380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.024424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.024438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.028246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.028309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.028323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.031672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.031718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.031732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.035048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.035093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.035123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.039348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.039405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.039436] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.043406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.043456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.043470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.046687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.046733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.046762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.050176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.050221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.050235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.054100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.054145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.054175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.057613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.057684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.057697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.061123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.061167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.061180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.065329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.065370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.065383] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.068432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.068476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.068506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.072215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.072270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.072284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.076325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.076372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.076385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.107 [2024-07-24 18:05:56.080611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.107 [2024-07-24 18:05:56.080657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.107 [2024-07-24 18:05:56.080670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.383 [2024-07-24 18:05:56.083397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.383 [2024-07-24 18:05:56.083438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.383 [2024-07-24 18:05:56.083467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.087380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.087422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.087436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.091287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.091331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.091343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.094851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.094894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.094923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.098631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.098675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.098687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.102484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.102528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.102540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.105774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.105815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.105827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.110031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.110077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.110090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.114942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.114989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.115003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.118564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.118611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.118625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.122773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.122822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.122836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.127753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.127804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.127834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.132165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.132216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.132247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.135457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.135511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.135524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.139967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.140018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.140032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.143223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.143278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.143291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.147238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.147298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.147311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.151844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.151892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.151922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.155192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.155237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.155262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.159443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.159499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.159513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.163611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.163683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.163697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.167869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.167922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.167937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.170957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.171008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.171021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.175941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 
00:18:49.384 [2024-07-24 18:05:56.175992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.176006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.180589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.180659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.180673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.183866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.183912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.183926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.187358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.384 [2024-07-24 18:05:56.187405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.384 [2024-07-24 18:05:56.187435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.384 [2024-07-24 18:05:56.191296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.191336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.191366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.195318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.195360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.195373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.198542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.198587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.198599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.202798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.202845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.202859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.207666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.207714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.207728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.211149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.211205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.211222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.215355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.215409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.215425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.220482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.220537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.220552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.225578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.225639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.225653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.229511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.229567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.229584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.234253] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.234326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.234341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.238954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.239006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.239020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.244128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.244184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.244200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.249034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.249089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.249103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.252131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.252181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.252196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.257075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.257120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.257151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.260043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.260090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.260120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
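
Every record in this stream follows the same pattern: nvme_tcp.c reports the data digest mismatch on the qpair, nvme_qpair.c prints the affected READ (len:32 blocks of 4096 bytes, i.e. the 128 KiB I/O size of this case), and the command completes with status (00/22), COMMAND TRANSIENT TRANSPORT ERROR, which is the counter the get_transient_errcount check later reads back through bdev_get_iostat. When eyeballing a captured log like this one, the completions can also be tallied straight from the text; a small helper, hypothetical and not part of the test suite:

    # Hypothetical cross-check, not part of digest.sh: count injected digest-error
    # completions in a saved console log. grep -o counts every occurrence, since
    # this capture packs several records onto one physical line.
    count_transient_errors() {
        local log=$1
        grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log" | wc -l
    }
    # e.g.: count_transient_errors nvmf-tcp-vg-autotest-console.log
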
00:18:49.385 [2024-07-24 18:05:56.263852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.263897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.263927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.267516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.267583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.267597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.271026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.271071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.271084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.275038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.275089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.275102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.279302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.279345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.279358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.282635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.282676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.282689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.286597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.286659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.286680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.292200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.292280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.292296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.295799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.385 [2024-07-24 18:05:56.295851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.385 [2024-07-24 18:05:56.295867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.385 [2024-07-24 18:05:56.300270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.300324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.300338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.303769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.303816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.303830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.307176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.307221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.307250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.311262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.311308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.311322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.316291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.316342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.316356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.321247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.321308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.321321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.324119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.324168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.324182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.328348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.328392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.328406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.331862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.331910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.331923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.336165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.336215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.336228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.340991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.341046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.341060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.343786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.343829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.343842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.347885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.347935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.347949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.351584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.351632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.351645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.386 [2024-07-24 18:05:56.354712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.386 [2024-07-24 18:05:56.354754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.386 [2024-07-24 18:05:56.354767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.646 [2024-07-24 18:05:56.359412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.646 [2024-07-24 18:05:56.359457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.646 [2024-07-24 18:05:56.359471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.646 [2024-07-24 18:05:56.363105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.646 [2024-07-24 18:05:56.363152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.646 [2024-07-24 18:05:56.363165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.646 [2024-07-24 18:05:56.366380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.646 [2024-07-24 18:05:56.366425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.646 [2024-07-24 18:05:56.366438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.370160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.370209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 
[2024-07-24 18:05:56.370239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.375674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.375726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.375740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.380111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.380157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.380170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.383198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.383254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.383269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.387083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.387126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.387139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.391177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.391224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.391238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.395717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.395761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.395775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.398412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.398453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.398466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.403211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.403286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.403299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.406681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.406723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.406736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.410360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.410402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.410416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.415287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.415333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.415363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.419695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.419737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.419750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.422975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.423015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.423027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.427786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.427829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.427843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.432435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.432479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.432492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.435554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.435594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.435608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.439819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.439863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.439875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.444455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.444500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.444514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.447543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.447581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.447594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.451279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.451316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.451328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.455314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.455349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.455361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.459537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.459578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.459591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.462592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.462648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.462660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.467191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.467233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.467258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.471657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.471699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.647 [2024-07-24 18:05:56.471712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.647 [2024-07-24 18:05:56.475896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.647 [2024-07-24 18:05:56.475935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.475948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.478456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.478490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.478503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.482366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 
[2024-07-24 18:05:56.482404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.482416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.486210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.486271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.486284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.489360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.489396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.489423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.492703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.492744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.492757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.495802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.495843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.495856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.499023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.499063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.499075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.502310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.502348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.502360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.505800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.505846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.505859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.509541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.509582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.509595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.513675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.513719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.513732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.518634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.518678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.518692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.522861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.522908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.522922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.526055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.526101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.526115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.530894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.530940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.530955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.535728] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.535780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.535796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.538841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.538883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.538896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.543401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.543461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.543492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.548123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.548173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.548189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.552380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.552443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.552465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.556353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.556397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.556412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.560535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.560587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.560604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:49.648 [2024-07-24 18:05:56.565651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.565709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.565724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.569230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.569298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.569313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.573550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.573598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.573612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.648 [2024-07-24 18:05:56.578590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.648 [2024-07-24 18:05:56.578658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.648 [2024-07-24 18:05:56.578679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.649 [2024-07-24 18:05:56.583522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.649 [2024-07-24 18:05:56.583575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.649 [2024-07-24 18:05:56.583592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.649 [2024-07-24 18:05:56.587140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.649 [2024-07-24 18:05:56.587187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.649 [2024-07-24 18:05:56.587201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.649 [2024-07-24 18:05:56.590701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.649 [2024-07-24 18:05:56.590748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.649 [2024-07-24 18:05:56.590778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.649 [2024-07-24 18:05:56.594925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.649 [2024-07-24 18:05:56.594968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.649 [2024-07-24 18:05:56.594981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.649 [2024-07-24 18:05:56.598621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.649 [2024-07-24 18:05:56.598665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.649 [2024-07-24 18:05:56.598678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.649 [2024-07-24 18:05:56.602876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.649 [2024-07-24 18:05:56.602940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.649 [2024-07-24 18:05:56.602961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.649 [2024-07-24 18:05:56.607220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.649 [2024-07-24 18:05:56.607288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.649 [2024-07-24 18:05:56.607306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.649 [2024-07-24 18:05:56.611068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.649 [2024-07-24 18:05:56.611111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.649 [2024-07-24 18:05:56.611125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.649 [2024-07-24 18:05:56.615246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.649 [2024-07-24 18:05:56.615305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.649 [2024-07-24 18:05:56.615337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.649 [2024-07-24 18:05:56.619563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.649 [2024-07-24 18:05:56.619608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.649 [2024-07-24 18:05:56.619624] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.622532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.622572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.622585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.626423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.626465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.626478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.629962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.630004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.630016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.633951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.633994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.634007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.637961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.638002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.638015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.641296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.641352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.641365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.645527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.645571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.645584] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.648701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.648746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.648759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.652635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.652693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.652723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.656194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.656256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.656271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.660335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.660379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.660393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.664507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.664551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.664564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.667742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.667788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.909 [2024-07-24 18:05:56.667801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.672406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.672450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:49.909 [2024-07-24 18:05:56.672463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.909 [2024-07-24 18:05:56.675713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.909 [2024-07-24 18:05:56.675754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.675767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.679629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.679679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.679708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.683769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.683813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.683841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.686738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.686776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.686788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.690621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.690664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.690676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.693872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.693914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.693927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.698101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.698148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.698161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.702422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.702467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.702497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.705600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.705643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.705655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.709824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.709871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.709884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.714439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.714485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.714498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.718003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.718046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.718075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.721469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.721511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.721539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.724930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.724973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.724985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.728446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.728488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.728518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.731846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.731890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.731904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.735232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.735282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.735296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.739311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.739357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.739370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.743551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.743599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.743613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.746911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.746953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.746966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.751002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 
00:18:49.910 [2024-07-24 18:05:56.751048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.751062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.755933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.755981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.755994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.760662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.760710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.760723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.764220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.764275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.764288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.768210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.768267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.768280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.772844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.772888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.772901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.776304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.910 [2024-07-24 18:05:56.776345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.910 [2024-07-24 18:05:56.776358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.910 [2024-07-24 18:05:56.780523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.780567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.780581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.785193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.785237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.785261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.789635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.789679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.789692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.793035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.793080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.793093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.797118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.797166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.797179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.801632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.801679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.801692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.804979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.805028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.805041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.808628] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.808677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.808690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.814212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.814280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.814294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.819037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.819092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.819107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.822033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.822076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.822090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.826930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.826983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.826997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.830612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.830669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.830682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.834122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.834178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.834192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:49.911 [2024-07-24 18:05:56.839092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.839148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.839162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.842946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.842995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.843026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.847142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.847192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.847206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.850492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.850536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.850565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.854436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.854485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.854499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.858975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.859024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.859038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.862691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.862738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.862751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.866720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.866767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.866782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.869843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.869887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.869901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.874497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.874557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.874571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.878832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.878878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.878891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.911 [2024-07-24 18:05:56.882038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:49.911 [2024-07-24 18:05:56.882082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.911 [2024-07-24 18:05:56.882094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.171 [2024-07-24 18:05:56.886422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.171 [2024-07-24 18:05:56.886467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.171 [2024-07-24 18:05:56.886480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.171 [2024-07-24 18:05:56.891188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.171 [2024-07-24 18:05:56.891237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.171 [2024-07-24 18:05:56.891263] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.171 [2024-07-24 18:05:56.896407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.171 [2024-07-24 18:05:56.896456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.171 [2024-07-24 18:05:56.896470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.171 [2024-07-24 18:05:56.899858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.171 [2024-07-24 18:05:56.899904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.171 [2024-07-24 18:05:56.899917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.171 [2024-07-24 18:05:56.904136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.171 [2024-07-24 18:05:56.904182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.904196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.907161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.907202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.907216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.911263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.911308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.911321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.915526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.915575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.915589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.919477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.919528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.919549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.922680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.922722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.922735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.927065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.927106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.927118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.931324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.931365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.931378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.934098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.934136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.934147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.938518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.938561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.938573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.941459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.941499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.941511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.945490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.945531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:50.172 [2024-07-24 18:05:56.945543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.948959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.949002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.949015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.952525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.952581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.952593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.956493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.956539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.956552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.960157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.960202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.960215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.963700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.963740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.963753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.967321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.967359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.967371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.971118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.971159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.971172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.974697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.974737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.974749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.978317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.978356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.978370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.982091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.982134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.982146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.985748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.985786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.985798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.989074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.989113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.989126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.992707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.992748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.992759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:56.996884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:56.996926] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:56.996939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.172 [2024-07-24 18:05:57.000242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.172 [2024-07-24 18:05:57.000296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.172 [2024-07-24 18:05:57.000310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.003932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.003975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.003988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.007596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.007636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.007649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.011091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.011130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.011142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.014831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.014870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.014882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.018084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.018126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.018138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.021918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.021957] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.021969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.025245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.025296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.025308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.028889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.028930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.028942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.032905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.032946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.032957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.036235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.036286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.036300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.040271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.040312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.040325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.043907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.043952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.043965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.047997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 
00:18:50.173 [2024-07-24 18:05:57.048041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.048054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.051174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.051219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.051232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.054826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.054873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.054886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.058500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.058547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.058559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.062929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.062981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.062994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.066185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.066234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.066263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.071059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.071130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.071151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.076656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.076710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.076724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.080978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.081026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.081040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.084470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.084517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.084530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.088366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.088411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.088424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.092157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.092202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.092215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.096289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.096350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.096369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.101468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.173 [2024-07-24 18:05:57.101525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.173 [2024-07-24 18:05:57.101540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.173 [2024-07-24 18:05:57.106524] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.174 [2024-07-24 18:05:57.106571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.174 [2024-07-24 18:05:57.106584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.174 [2024-07-24 18:05:57.111176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.174 [2024-07-24 18:05:57.111222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.174 [2024-07-24 18:05:57.111236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.174 [2024-07-24 18:05:57.114650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.174 [2024-07-24 18:05:57.114704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.174 [2024-07-24 18:05:57.114721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.174 [2024-07-24 18:05:57.120230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.174 [2024-07-24 18:05:57.120299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.174 [2024-07-24 18:05:57.120315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.174 [2024-07-24 18:05:57.124774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.174 [2024-07-24 18:05:57.124823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.174 [2024-07-24 18:05:57.124837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.174 [2024-07-24 18:05:57.129197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.174 [2024-07-24 18:05:57.129287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.174 [2024-07-24 18:05:57.129303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.174 [2024-07-24 18:05:57.132552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.174 [2024-07-24 18:05:57.132600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.174 [2024-07-24 18:05:57.132630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:50.174 [2024-07-24 18:05:57.138022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.174 [2024-07-24 18:05:57.138075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.174 [2024-07-24 18:05:57.138090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.174 [2024-07-24 18:05:57.141603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.174 [2024-07-24 18:05:57.141652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.174 [2024-07-24 18:05:57.141666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.434 [2024-07-24 18:05:57.146186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.434 [2024-07-24 18:05:57.146265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.434 [2024-07-24 18:05:57.146284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.434 [2024-07-24 18:05:57.150027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.434 [2024-07-24 18:05:57.150077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.434 [2024-07-24 18:05:57.150091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.434 [2024-07-24 18:05:57.153266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.434 [2024-07-24 18:05:57.153310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.434 [2024-07-24 18:05:57.153324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.434 [2024-07-24 18:05:57.158475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.434 [2024-07-24 18:05:57.158541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.434 [2024-07-24 18:05:57.158562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.434 [2024-07-24 18:05:57.162727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.434 [2024-07-24 18:05:57.162777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.434 [2024-07-24 18:05:57.162791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.434 [2024-07-24 18:05:57.165878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.434 [2024-07-24 18:05:57.165923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.434 [2024-07-24 18:05:57.165937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.434 [2024-07-24 18:05:57.170510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.434 [2024-07-24 18:05:57.170557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.434 [2024-07-24 18:05:57.170570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.434 [2024-07-24 18:05:57.175640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.434 [2024-07-24 18:05:57.175690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.434 [2024-07-24 18:05:57.175706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.434 [2024-07-24 18:05:57.178870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.434 [2024-07-24 18:05:57.178909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.434 [2024-07-24 18:05:57.178922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.182863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.182905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.182918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.187204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.187256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.187268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.191688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.191733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.191746] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.195780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.195817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.195845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.199424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.199460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.199472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.202894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.202931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.202943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.206559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.206600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.206612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.210306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.210345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.210357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.213679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.213719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.213731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.217527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.217569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 
18:05:57.217582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.220885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.220926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.220938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.224567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.224613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.224637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.228513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.228557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.228570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.231323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.231371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.231390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.236112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.236164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.236181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.240859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.240906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.240935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.245175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.245219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.245233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.248238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.248295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.248308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.253874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.253925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.253939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.258502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.258547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.258578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.261662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.261704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.261717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.265921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.265966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.265979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.270029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.270072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.270085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.273370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.273412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.273424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.277762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.277808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.277821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.282155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.435 [2024-07-24 18:05:57.282215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.435 [2024-07-24 18:05:57.282228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.435 [2024-07-24 18:05:57.285619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.285659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.285671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.289480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.289521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.289533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.294452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.294502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.294518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.299039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.299081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.299094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.301845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.301884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.301897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.306180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.306223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.306236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.309677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.309718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.309730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.313102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.313142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.313154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.316517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.316561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.316574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.320014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.320058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.320072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.324096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.324138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.324167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.328444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 
00:18:50.436 [2024-07-24 18:05:57.328487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.328500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.331909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.331951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.331964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.336003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.336047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.336060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.340637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.340681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.340695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.344054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.344099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.344112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.348352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.348401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.348416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.352554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.352600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.352613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.355911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.355957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.355970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.359554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.359598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.359611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.363286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.363330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.363346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.367546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.367590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.367604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.371429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.371472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.371493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.374701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.374742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.374755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.380187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.380262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.380285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.384843] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.436 [2024-07-24 18:05:57.384895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.436 [2024-07-24 18:05:57.384912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.436 [2024-07-24 18:05:57.388293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.437 [2024-07-24 18:05:57.388337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.437 [2024-07-24 18:05:57.388351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.437 [2024-07-24 18:05:57.392481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.437 [2024-07-24 18:05:57.392525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.437 [2024-07-24 18:05:57.392539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.437 [2024-07-24 18:05:57.396972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.437 [2024-07-24 18:05:57.397020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.437 [2024-07-24 18:05:57.397033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.437 [2024-07-24 18:05:57.401363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.437 [2024-07-24 18:05:57.401413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.437 [2024-07-24 18:05:57.401427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.437 [2024-07-24 18:05:57.404224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.437 [2024-07-24 18:05:57.404289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.437 [2024-07-24 18:05:57.404307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.408797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.408849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.408864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:18:50.697 [2024-07-24 18:05:57.413065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.413113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.413127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.416902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.416986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.417008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.421398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.421450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.421465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.425543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.425590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.425602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.429382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.429431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.429447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.432605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.432656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.432670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.436809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.436859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.436873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.441040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.441088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.441102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.444471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.444518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.444532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.448059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.448107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.448122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.451684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.451727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.451741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.697 [2024-07-24 18:05:57.456085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.697 [2024-07-24 18:05:57.456144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-07-24 18:05:57.456159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.461220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.461288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.461303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.464775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.464818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.464831] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.468918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.468961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.468974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.473324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.473364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.473376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.477608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.477651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.477664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.480764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.480805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.480818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.484140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.484181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.484193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.488102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.488145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.488157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.492875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.492920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.492932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.495956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.495994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.496006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.499757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.499801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.499814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.503744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.503786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.503798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.507652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.507694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.507707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.510970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.511009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.511021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.514473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.514515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.514527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.517736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.517777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.517789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.521269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.521309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.521321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.524965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.525007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.525019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.529156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.529200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.529212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.532334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.532372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.532384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.535599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.535638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.535650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.539306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.539343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.539355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.542772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.542811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.542823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.546701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.546745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.546758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.549817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.549858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.549870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.698 [2024-07-24 18:05:57.554069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.698 [2024-07-24 18:05:57.554113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.698 [2024-07-24 18:05:57.554126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.557652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.557693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.557706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.561107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.561148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.561160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.564195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.564235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.564260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.567601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.567640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.567652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.571002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.571042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.571055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.574389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.574428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.574440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.578480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.578523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.578535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.583141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.583187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.583199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.586687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.586724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.586737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.590697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.590740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.590752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.595230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 
00:18:50.699 [2024-07-24 18:05:57.595287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.595300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.598428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.598466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.598478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.602271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.602310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.602325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.606367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.606408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.606421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.609779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.609821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.609833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.613253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.613294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.613307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.617338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.617380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.617392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.620472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.620514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.620527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.624825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.624869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.624882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.629066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.629112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.629125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.632004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.632043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.632056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.636064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.636105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.636118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.639691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.639733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.639746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.643507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.643545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.643558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.648007] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.648049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.648061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.651181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.651218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.699 [2024-07-24 18:05:57.651231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.699 [2024-07-24 18:05:57.655578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.699 [2024-07-24 18:05:57.655617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.700 [2024-07-24 18:05:57.655629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.700 [2024-07-24 18:05:57.659975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.700 [2024-07-24 18:05:57.660016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.700 [2024-07-24 18:05:57.660045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.700 [2024-07-24 18:05:57.662747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.700 [2024-07-24 18:05:57.662784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.700 [2024-07-24 18:05:57.662796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.700 [2024-07-24 18:05:57.667042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.700 [2024-07-24 18:05:57.667084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.700 [2024-07-24 18:05:57.667096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.959 [2024-07-24 18:05:57.671355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.959 [2024-07-24 18:05:57.671394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.671423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:50.960 [2024-07-24 18:05:57.674287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.674321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.674332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.678055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.678093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.678105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.681437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.681476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.681505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.684953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.684991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.685020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.688330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.688369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.688381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.692218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.692270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.692283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.696454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.696495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.696508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.699653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.699690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.699703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.703312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.703348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.703376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.707977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.708021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.708034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.711374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.711411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.711439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.715162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.715203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.715215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.719290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.719330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.719359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.722821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.722883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.722905] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.726942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.726989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.727003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.731212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.731274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.731292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.734751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.734795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.734808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.960 [2024-07-24 18:05:57.738627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.960 [2024-07-24 18:05:57.738674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.960 [2024-07-24 18:05:57.738688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.742266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.742309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.742323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.745862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.745908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.745921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.750061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.750107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.750120] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.754720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.754766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.754780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.758317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.758357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.758371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.762468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.762511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.762525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.766847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.766893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.766906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.770193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.770237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.770265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.774766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.774810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.774824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.777947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.777990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.778003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.782064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.782107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.782120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.786004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.786046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.786060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.789444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.789486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.789515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.793634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.793681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.793694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.798025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.798085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.798107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.802496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.802554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.802571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.807510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.807557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.807576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.810649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.961 [2024-07-24 18:05:57.810690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.961 [2024-07-24 18:05:57.810704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.961 [2024-07-24 18:05:57.814890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.814933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.814946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.818735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.818780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.818793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.822502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.822548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.822561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.826829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.826877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.826890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.830118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.830161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.830174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.833970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.834015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.834028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.837691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.837735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.837749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.841410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.841453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.841467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.845737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.845783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.845796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.848950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.848992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.849007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.853190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.853234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.853267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.858331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.858375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.858388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.862889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 
00:18:50.962 [2024-07-24 18:05:57.862935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.862948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.866077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.866118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.866130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.870188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.870231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.870258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.962 [2024-07-24 18:05:57.874498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f03380) 00:18:50.962 [2024-07-24 18:05:57.874550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.962 [2024-07-24 18:05:57.874563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.962 00:18:50.962 Latency(us) 00:18:50.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.962 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:50.962 nvme0n1 : 2.00 7758.38 969.80 0.00 0.00 2058.61 612.45 11734.06 00:18:50.962 =================================================================================================================== 00:18:50.962 Total : 7758.38 969.80 0.00 0.00 2058.61 612.45 11734.06 00:18:50.962 0 00:18:50.963 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:50.963 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:50.963 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:50.963 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:50.963 | .driver_specific 00:18:50.963 | .nvme_error 00:18:50.963 | .status_code 00:18:50.963 | .command_transient_transport_error' 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 500 > 0 )) 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 92940 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 92940 ']' 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # kill -0 92940 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92940 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:51.556 killing process with pid 92940 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92940' 00:18:51.556 Received shutdown signal, test time was about 2.000000 seconds 00:18:51.556 00:18:51.556 Latency(us) 00:18:51.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.556 =================================================================================================================== 00:18:51.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 92940 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 92940 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93036 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93036 /var/tmp/bperf.sock 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93036 ']' 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
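A minimal sketch of the bdevperf launch that host/digest.sh@57 performs in the trace above, with every flag copied from the log (the comment on -z reflects bdevperf's wait-for-RPC mode, matching the perform_tests call that appears below):

  # Start a dedicated bdevperf instance on its own RPC socket for the randwrite
  # 4096-byte, queue-depth-128 error test; -z keeps it idle until perform_tests
  # is issued over /var/tmp/bperf.sock.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # waitforlisten (pid 93036 in this run) then polls until the socket accepts RPCs.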
00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.556 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:51.556 [2024-07-24 18:05:58.523585] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:18:51.556 [2024-07-24 18:05:58.523698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93036 ] 00:18:51.815 [2024-07-24 18:05:58.668266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.073 [2024-07-24 18:05:58.802104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.639 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:52.640 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:52.640 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:52.640 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:52.898 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:52.898 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.899 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:52.899 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.899 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:52.899 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:53.158 nvme0n1 00:18:53.158 18:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:53.158 18:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.158 18:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.158 18:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.158 18:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:53.158 18:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:53.417 Running I/O for 2 seconds... 
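Before the two-second run above begins, the trace enables NVMe error statistics, arms crc32c corruption in the accel layer, and attaches the target with data digests turned on; a condensed sketch of that sequence, with all arguments copied from the trace (plain rpc.py calls stand in for the framework's bperf_rpc/rpc_cmd helpers, and rpc_cmd reaching the target app's default RPC socket is an assumption):

  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  # Collect per-status-code NVMe error counts on the bdevperf side (retry count as in the trace).
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any earlier crc32c injection (assumed stand-in for rpc_cmd on the default socket).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # Attach the TCP target with --ddgst so every data payload carries a CRC32C digest.
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm crc32c corruption (opcode, type and -i 256 exactly as in the trace).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # Kick off the configured randwrite workload ("Running I/O for 2 seconds..." above).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests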
00:18:53.417 [2024-07-24 18:06:00.261753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ee5c8 00:18:53.417 [2024-07-24 18:06:00.262612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.262675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:53.417 [2024-07-24 18:06:00.273770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ef6a8 00:18:53.417 [2024-07-24 18:06:00.275155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.275198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:53.417 [2024-07-24 18:06:00.283656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f6cc8 00:18:53.417 [2024-07-24 18:06:00.284913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.284950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:53.417 [2024-07-24 18:06:00.293877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190feb58 00:18:53.417 [2024-07-24 18:06:00.294656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.294695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:53.417 [2024-07-24 18:06:00.303746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fb048 00:18:53.417 [2024-07-24 18:06:00.304429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.304465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:53.417 [2024-07-24 18:06:00.313816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e3d08 00:18:53.417 [2024-07-24 18:06:00.314322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.314359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:53.417 [2024-07-24 18:06:00.326128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ddc00 00:18:53.417 [2024-07-24 18:06:00.327377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.327412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 
dnr:0 00:18:53.417 [2024-07-24 18:06:00.335772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f0bc0 00:18:53.417 [2024-07-24 18:06:00.337444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.337482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:53.417 [2024-07-24 18:06:00.347257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f5be8 00:18:53.417 [2024-07-24 18:06:00.348100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.348138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:53.417 [2024-07-24 18:06:00.356558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f9b30 00:18:53.417 [2024-07-24 18:06:00.357546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.357580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:53.417 [2024-07-24 18:06:00.366086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f3a28 00:18:53.417 [2024-07-24 18:06:00.366892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.366926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:53.417 [2024-07-24 18:06:00.377768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eff18 00:18:53.417 [2024-07-24 18:06:00.379078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.379114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.417 [2024-07-24 18:06:00.386844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f1430 00:18:53.417 [2024-07-24 18:06:00.387979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.417 [2024-07-24 18:06:00.388015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.676 [2024-07-24 18:06:00.396417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fd208 00:18:53.676 [2024-07-24 18:06:00.397649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.676 [2024-07-24 18:06:00.397682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 
sqhd:0043 p:0 m:0 dnr:0 00:18:53.676 [2024-07-24 18:06:00.406353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ddc00 00:18:53.676 [2024-07-24 18:06:00.407248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.676 [2024-07-24 18:06:00.407291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:53.676 [2024-07-24 18:06:00.416501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f8a50 00:18:53.676 [2024-07-24 18:06:00.417363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.676 [2024-07-24 18:06:00.417397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:53.676 [2024-07-24 18:06:00.427973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fac10 00:18:53.676 [2024-07-24 18:06:00.429223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.676 [2024-07-24 18:06:00.429270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:53.676 [2024-07-24 18:06:00.437830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e6b70 00:18:53.676 [2024-07-24 18:06:00.438940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.676 [2024-07-24 18:06:00.438975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:53.676 [2024-07-24 18:06:00.447905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e01f8 00:18:53.676 [2024-07-24 18:06:00.448908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.676 [2024-07-24 18:06:00.448942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:53.676 [2024-07-24 18:06:00.458107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f81e0 00:18:53.676 [2024-07-24 18:06:00.458938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.676 [2024-07-24 18:06:00.458973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:53.676 [2024-07-24 18:06:00.468952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f7970 00:18:53.677 [2024-07-24 18:06:00.469807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.469842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.482292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f2d80 00:18:53.677 [2024-07-24 18:06:00.483789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.483827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.492045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ed920 00:18:53.677 [2024-07-24 18:06:00.493751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.493787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.503796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e49b0 00:18:53.677 [2024-07-24 18:06:00.504735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.504772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.513797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e6b70 00:18:53.677 [2024-07-24 18:06:00.514518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.514553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.523342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fb480 00:18:53.677 [2024-07-24 18:06:00.523954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.523994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.535139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e1710 00:18:53.677 [2024-07-24 18:06:00.536732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.536768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.542406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f5378 00:18:53.677 [2024-07-24 18:06:00.543119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.543154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.553006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e3d08 00:18:53.677 [2024-07-24 18:06:00.553869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.553906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.565103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f6020 00:18:53.677 [2024-07-24 18:06:00.566420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.566455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.574430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f0bc0 00:18:53.677 [2024-07-24 18:06:00.575603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.575640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.584976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f5be8 00:18:53.677 [2024-07-24 18:06:00.585870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.585906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.594753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e3498 00:18:53.677 [2024-07-24 18:06:00.595937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.595975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.605062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f20d8 00:18:53.677 [2024-07-24 18:06:00.606132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.606169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.615180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e1710 00:18:53.677 [2024-07-24 18:06:00.616113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.616150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.627563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f8618 00:18:53.677 [2024-07-24 18:06:00.628465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.628504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.637864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190df988 00:18:53.677 [2024-07-24 18:06:00.638673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.638718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:53.677 [2024-07-24 18:06:00.648060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f4f40 00:18:53.677 [2024-07-24 18:06:00.648655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.677 [2024-07-24 18:06:00.648689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:53.936 [2024-07-24 18:06:00.658508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190de038 00:18:53.936 [2024-07-24 18:06:00.659401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.936 [2024-07-24 18:06:00.659437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:53.936 [2024-07-24 18:06:00.668148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f7538 00:18:53.936 [2024-07-24 18:06:00.668875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.936 [2024-07-24 18:06:00.668910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:53.936 [2024-07-24 18:06:00.680122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f8618 00:18:53.936 [2024-07-24 18:06:00.681392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.936 [2024-07-24 18:06:00.681428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:53.936 [2024-07-24 18:06:00.690882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e5658 00:18:53.936 [2024-07-24 18:06:00.692091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.936 [2024-07-24 18:06:00.692128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:53.936 [2024-07-24 18:06:00.703839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ff3c8 00:18:53.936 [2024-07-24 18:06:00.705693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.936 [2024-07-24 18:06:00.705729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:53.936 [2024-07-24 18:06:00.711421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e1710 00:18:53.936 [2024-07-24 18:06:00.712340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.936 [2024-07-24 18:06:00.712375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:53.936 [2024-07-24 18:06:00.722275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f4f40 00:18:53.936 [2024-07-24 18:06:00.723351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.936 [2024-07-24 18:06:00.723387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:53.936 [2024-07-24 18:06:00.732714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ef270 00:18:53.936 [2024-07-24 18:06:00.733352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.936 [2024-07-24 18:06:00.733388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:53.936 [2024-07-24 18:06:00.746330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190dfdc0 00:18:53.936 [2024-07-24 18:06:00.748216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.936 [2024-07-24 18:06:00.748262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.936 [2024-07-24 18:06:00.756270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e1f80 00:18:53.936 [2024-07-24 18:06:00.757888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.936 [2024-07-24 18:06:00.757969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.766789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e0ea0 00:18:53.937 [2024-07-24 18:06:00.768560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.768605] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.776387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eea00 00:18:53.937 [2024-07-24 18:06:00.777169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.777207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.787769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f2948 00:18:53.937 [2024-07-24 18:06:00.788696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.788734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.801022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e0ea0 00:18:53.937 [2024-07-24 18:06:00.802598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.802639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.812425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e8088 00:18:53.937 [2024-07-24 18:06:00.814156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.814194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.823311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e1b48 00:18:53.937 [2024-07-24 18:06:00.825024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.825062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.833655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fda78 00:18:53.937 [2024-07-24 18:06:00.835165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.835201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.843509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f4f40 00:18:53.937 [2024-07-24 18:06:00.844734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.844772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.854071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e4140 00:18:53.937 [2024-07-24 18:06:00.855291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.855326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.865060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fda78 00:18:53.937 [2024-07-24 18:06:00.866291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.866330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.874894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190efae0 00:18:53.937 [2024-07-24 18:06:00.875982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.876017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.886982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190feb58 00:18:53.937 [2024-07-24 18:06:00.888650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.888681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.897663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e9168 00:18:53.937 [2024-07-24 18:06:00.899313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.899345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.937 [2024-07-24 18:06:00.906310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ea680 00:18:53.937 [2024-07-24 18:06:00.907514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.937 [2024-07-24 18:06:00.907549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:54.196 [2024-07-24 18:06:00.916870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fb480 00:18:54.196 [2024-07-24 18:06:00.918056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.196 [2024-07-24 18:06:00.918091] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:54.196 [2024-07-24 18:06:00.926737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e0ea0 00:18:54.196 [2024-07-24 18:06:00.927841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.196 [2024-07-24 18:06:00.927876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:54.196 [2024-07-24 18:06:00.936520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e0ea0 00:18:54.196 [2024-07-24 18:06:00.937427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.196 [2024-07-24 18:06:00.937464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:54.196 [2024-07-24 18:06:00.947219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e12d8 00:18:54.196 [2024-07-24 18:06:00.948340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.196 [2024-07-24 18:06:00.948373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:54.196 [2024-07-24 18:06:00.957540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fc560 00:18:54.196 [2024-07-24 18:06:00.958513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.196 [2024-07-24 18:06:00.958548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:54.196 [2024-07-24 18:06:00.970093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e6b70 00:18:54.196 [2024-07-24 18:06:00.971682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.196 [2024-07-24 18:06:00.971716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:54.196 [2024-07-24 18:06:00.977338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190df988 00:18:54.196 [2024-07-24 18:06:00.978054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:00.978083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:00.987635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e2c28 00:18:54.197 [2024-07-24 18:06:00.988425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 
[2024-07-24 18:06:00.988458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.000164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ddc00 00:18:54.197 [2024-07-24 18:06:01.001536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.001577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.010816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f2948 00:18:54.197 [2024-07-24 18:06:01.012378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.012414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.021059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190de8a8 00:18:54.197 [2024-07-24 18:06:01.022267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.022318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.031238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f0788 00:18:54.197 [2024-07-24 18:06:01.032494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.032529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.040760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fb048 00:18:54.197 [2024-07-24 18:06:01.041617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.041651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.050523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fdeb0 00:18:54.197 [2024-07-24 18:06:01.051328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.051363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.060292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e0630 00:18:54.197 [2024-07-24 18:06:01.060952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8474 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:54.197 [2024-07-24 18:06:01.060986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.070954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e0ea0 00:18:54.197 [2024-07-24 18:06:01.071825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.071861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.082198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ec408 00:18:54.197 [2024-07-24 18:06:01.083179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.083217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.095080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eaef0 00:18:54.197 [2024-07-24 18:06:01.096661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.096698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.102727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fbcf0 00:18:54.197 [2024-07-24 18:06:01.103425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.103463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.115711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eaab8 00:18:54.197 [2024-07-24 18:06:01.116879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.116920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.125948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e8088 00:18:54.197 [2024-07-24 18:06:01.126965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.127007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.136395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e73e0 00:18:54.197 [2024-07-24 18:06:01.137281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.137325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.149478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eee38 00:18:54.197 [2024-07-24 18:06:01.151104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.151145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.157298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f3a28 00:18:54.197 [2024-07-24 18:06:01.158026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.158068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:54.197 [2024-07-24 18:06:01.170195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e9e10 00:18:54.197 [2024-07-24 18:06:01.171440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.197 [2024-07-24 18:06:01.171481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:54.456 [2024-07-24 18:06:01.180228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fdeb0 00:18:54.456 [2024-07-24 18:06:01.181355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.456 [2024-07-24 18:06:01.181396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:54.456 [2024-07-24 18:06:01.190614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f0350 00:18:54.456 [2024-07-24 18:06:01.191684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.191726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.201030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e4de8 00:18:54.457 [2024-07-24 18:06:01.201648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.201690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.212936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f9f68 00:18:54.457 [2024-07-24 18:06:01.214299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19950 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.214338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.222396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f8618 00:18:54.457 [2024-07-24 18:06:01.224114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.224154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.234477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f96f8 00:18:54.457 [2024-07-24 18:06:01.235836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.235877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.245540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e1f80 00:18:54.457 [2024-07-24 18:06:01.246922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.246997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.255141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f7da8 00:18:54.457 [2024-07-24 18:06:01.256097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.256143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.265077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fc128 00:18:54.457 [2024-07-24 18:06:01.265868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.265905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.277093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f0bc0 00:18:54.457 [2024-07-24 18:06:01.278330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.278371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.289487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eb328 00:18:54.457 [2024-07-24 18:06:01.291325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 
nsid:1 lba:2720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.291364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.296913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fd640 00:18:54.457 [2024-07-24 18:06:01.297730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.297769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.309322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e8d30 00:18:54.457 [2024-07-24 18:06:01.310705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.310747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.317631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fbcf0 00:18:54.457 [2024-07-24 18:06:01.318410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.318451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.329880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190de038 00:18:54.457 [2024-07-24 18:06:01.331298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.331338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.339980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fcdd0 00:18:54.457 [2024-07-24 18:06:01.341190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.341229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.349976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fb480 00:18:54.457 [2024-07-24 18:06:01.351096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.351138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.360770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ee5c8 00:18:54.457 [2024-07-24 18:06:01.361902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:114 nsid:1 lba:1415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.361943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.372422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e49b0 00:18:54.457 [2024-07-24 18:06:01.373709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.373750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.383559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e4de8 00:18:54.457 [2024-07-24 18:06:01.384396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.384438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.394209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ff3c8 00:18:54.457 [2024-07-24 18:06:01.394953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.394995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.405889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f35f0 00:18:54.457 [2024-07-24 18:06:01.406712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.406752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.416609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fe2e8 00:18:54.457 [2024-07-24 18:06:01.417682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.417723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:54.457 [2024-07-24 18:06:01.427507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f96f8 00:18:54.457 [2024-07-24 18:06:01.428491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.457 [2024-07-24 18:06:01.428528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.438054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e9e10 00:18:54.716 [2024-07-24 18:06:01.438878] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.438917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.449102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e8d30 00:18:54.716 [2024-07-24 18:06:01.450074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.450110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.460420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e23b8 00:18:54.716 [2024-07-24 18:06:01.461387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.461424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.473481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ec408 00:18:54.716 [2024-07-24 18:06:01.475074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.475111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.484767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fe720 00:18:54.716 [2024-07-24 18:06:01.486329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.486365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.493928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f46d0 00:18:54.716 [2024-07-24 18:06:01.495047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.495083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.504677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e38d0 00:18:54.716 [2024-07-24 18:06:01.505247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.505314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.517365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e0ea0 00:18:54.716 [2024-07-24 18:06:01.518675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.518714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.527839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e9e10 00:18:54.716 [2024-07-24 18:06:01.528977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.529015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.538372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f2948 00:18:54.716 [2024-07-24 18:06:01.539370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.539408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.548535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f31b8 00:18:54.716 [2024-07-24 18:06:01.549388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.549427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:54.716 [2024-07-24 18:06:01.561433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f1430 00:18:54.716 [2024-07-24 18:06:01.562895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.716 [2024-07-24 18:06:01.562933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.570809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eaab8 00:18:54.717 [2024-07-24 18:06:01.572445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.572482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.582486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e9168 00:18:54.717 [2024-07-24 18:06:01.583777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.583817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.591994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f96f8 00:18:54.717 [2024-07-24 18:06:01.593129] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.593166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.600647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eee38 00:18:54.717 [2024-07-24 18:06:01.601326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.601362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.611287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ea680 00:18:54.717 [2024-07-24 18:06:01.612114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.612150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.621966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f2d80 00:18:54.717 [2024-07-24 18:06:01.622935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.622971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.632212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eaab8 00:18:54.717 [2024-07-24 18:06:01.632743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.632781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.643884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ddc00 00:18:54.717 [2024-07-24 18:06:01.645137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.645175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.652965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f35f0 00:18:54.717 [2024-07-24 18:06:01.654599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.654635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.663966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190feb58 00:18:54.717 [2024-07-24 
18:06:01.664839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.664876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.673270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ddc00 00:18:54.717 [2024-07-24 18:06:01.674300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.674335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:54.717 [2024-07-24 18:06:01.683318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ef270 00:18:54.717 [2024-07-24 18:06:01.684150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.717 [2024-07-24 18:06:01.684192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:54.976 [2024-07-24 18:06:01.695833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fe2e8 00:18:54.976 [2024-07-24 18:06:01.697399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.976 [2024-07-24 18:06:01.697437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:54.976 [2024-07-24 18:06:01.706648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e84c0 00:18:54.976 [2024-07-24 18:06:01.708262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.976 [2024-07-24 18:06:01.708303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:54.976 [2024-07-24 18:06:01.715361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ebb98 00:18:54.976 [2024-07-24 18:06:01.716508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.976 [2024-07-24 18:06:01.716547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:54.976 [2024-07-24 18:06:01.726397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eff18 00:18:54.976 [2024-07-24 18:06:01.727700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.727740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.736472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f2948 
00:18:54.977 [2024-07-24 18:06:01.737467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.737536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.748575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e88f8 00:18:54.977 [2024-07-24 18:06:01.750174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.750219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.759238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f35f0 00:18:54.977 [2024-07-24 18:06:01.760844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.760886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.770753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190de470 00:18:54.977 [2024-07-24 18:06:01.772591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.772634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.778587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f2948 00:18:54.977 [2024-07-24 18:06:01.779438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.779477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.789625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ddc00 00:18:54.977 [2024-07-24 18:06:01.790611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.790650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.799972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ecc78 00:18:54.977 [2024-07-24 18:06:01.800562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.800603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.813235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with 
pdu=0x2000190f6cc8 00:18:54.977 [2024-07-24 18:06:01.814990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.815030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.820958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eff18 00:18:54.977 [2024-07-24 18:06:01.821864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.821902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.832388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eee38 00:18:54.977 [2024-07-24 18:06:01.833398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.833437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.843887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ef270 00:18:54.977 [2024-07-24 18:06:01.845116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.845159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.855256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e27f0 00:18:54.977 [2024-07-24 18:06:01.856049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.856092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.866119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f7100 00:18:54.977 [2024-07-24 18:06:01.866810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.866855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.879572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190edd58 00:18:54.977 [2024-07-24 18:06:01.881276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.881315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.891218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf4e320) with pdu=0x2000190e4578 00:18:54.977 [2024-07-24 18:06:01.893055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.893097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.899208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190eff18 00:18:54.977 [2024-07-24 18:06:01.899974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.900016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.913327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e5220 00:18:54.977 [2024-07-24 18:06:01.915130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.915170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.921196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f3e60 00:18:54.977 [2024-07-24 18:06:01.922072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.922110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.932219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f46d0 00:18:54.977 [2024-07-24 18:06:01.933116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.933153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:54.977 [2024-07-24 18:06:01.942853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e95a0 00:18:54.977 [2024-07-24 18:06:01.943704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.977 [2024-07-24 18:06:01.943738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:55.237 [2024-07-24 18:06:01.954060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e4578 00:18:55.237 [2024-07-24 18:06:01.955054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.237 [2024-07-24 18:06:01.955088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:55.237 [2024-07-24 18:06:01.965107] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f2948 00:18:55.237 [2024-07-24 18:06:01.966131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.237 [2024-07-24 18:06:01.966167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:55.237 [2024-07-24 18:06:01.976072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f4f40 00:18:55.237 [2024-07-24 18:06:01.977117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.237 [2024-07-24 18:06:01.977155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:55.237 [2024-07-24 18:06:01.987294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e6300 00:18:55.237 [2024-07-24 18:06:01.988338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.237 [2024-07-24 18:06:01.988376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:55.237 [2024-07-24 18:06:01.998863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fb480 00:18:55.237 [2024-07-24 18:06:02.000035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.000073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.010730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ecc78 00:18:55.238 [2024-07-24 18:06:02.012367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.012404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.020933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f1868 00:18:55.238 [2024-07-24 18:06:02.022506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.022540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.028599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ed4e8 00:18:55.238 [2024-07-24 18:06:02.029311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.029344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.040008] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fc128 00:18:55.238 [2024-07-24 18:06:02.040888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.040924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.052669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e6fa8 00:18:55.238 [2024-07-24 18:06:02.053978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.054014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.062357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fe720 00:18:55.238 [2024-07-24 18:06:02.063478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.063524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.072514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ec408 00:18:55.238 [2024-07-24 18:06:02.073683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.073715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.083054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fb8b8 00:18:55.238 [2024-07-24 18:06:02.083783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.083820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.093094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f81e0 00:18:55.238 [2024-07-24 18:06:02.094090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.094128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.103266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e5658 00:18:55.238 [2024-07-24 18:06:02.104159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.104194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:55.238 
[2024-07-24 18:06:02.114934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ea680 00:18:55.238 [2024-07-24 18:06:02.116332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.116369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.123460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ee190 00:18:55.238 [2024-07-24 18:06:02.124228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.124269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.135765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f9b30 00:18:55.238 [2024-07-24 18:06:02.136700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.136736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.145657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fb8b8 00:18:55.238 [2024-07-24 18:06:02.146410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.146446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.155560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ebb98 00:18:55.238 [2024-07-24 18:06:02.156173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.156208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:55.238 [2024-07-24 18:06:02.166755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f7100 00:18:55.238 [2024-07-24 18:06:02.167518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.238 [2024-07-24 18:06:02.167554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:55.239 [2024-07-24 18:06:02.178368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f3a28 00:18:55.239 [2024-07-24 18:06:02.179269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.239 [2024-07-24 18:06:02.179302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:18:55.239 [2024-07-24 18:06:02.188803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e7818 00:18:55.239 [2024-07-24 18:06:02.189599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.239 [2024-07-24 18:06:02.189636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:55.239 [2024-07-24 18:06:02.199223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190f7538 00:18:55.239 [2024-07-24 18:06:02.199858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.239 [2024-07-24 18:06:02.199895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:55.239 [2024-07-24 18:06:02.211678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190e6738 00:18:55.498 [2024-07-24 18:06:02.213025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.498 [2024-07-24 18:06:02.213058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:55.498 [2024-07-24 18:06:02.221990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190fdeb0 00:18:55.498 [2024-07-24 18:06:02.223176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.498 [2024-07-24 18:06:02.223263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:55.498 [2024-07-24 18:06:02.231518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190ebb98 00:18:55.498 [2024-07-24 18:06:02.232176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.498 [2024-07-24 18:06:02.232219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:55.498 [2024-07-24 18:06:02.242448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e320) with pdu=0x2000190de470 00:18:55.498 [2024-07-24 18:06:02.243357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.498 [2024-07-24 18:06:02.243395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:55.498 00:18:55.498 Latency(us) 00:18:55.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.498 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.498 nvme0n1 : 2.01 23876.33 93.27 0.00 0.00 5355.36 2137.72 15291.73 00:18:55.498 =================================================================================================================== 00:18:55.498 Total : 23876.33 93.27 0.00 0.00 5355.36 2137.72 15291.73 
00:18:55.498 0 00:18:55.498 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:55.498 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:55.498 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:55.498 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:55.498 | .driver_specific 00:18:55.498 | .nvme_error 00:18:55.498 | .status_code 00:18:55.498 | .command_transient_transport_error' 00:18:55.757 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 187 > 0 )) 00:18:55.757 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93036 00:18:55.757 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93036 ']' 00:18:55.757 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93036 00:18:55.757 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:55.757 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.757 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93036 00:18:55.757 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:55.757 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:55.757 killing process with pid 93036 00:18:55.757 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93036' 00:18:55.757 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93036 00:18:55.757 Received shutdown signal, test time was about 2.000000 seconds 00:18:55.757 00:18:55.758 Latency(us) 00:18:55.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.758 =================================================================================================================== 00:18:55.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.758 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93036 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93126 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 
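The xtrace above shows how host/digest.sh obtains the transient-error count before tearing down the first bdevperf instance: bperf_rpc issues bdev_get_iostat against the /var/tmp/bperf.sock RPC socket and jq pulls the command_transient_transport_error counter out of the controller's nvme_error statistics (187 in this pass, which satisfies the (( 187 > 0 )) check). A minimal stand-alone form of that query, with the script path, socket, and jq filter taken from the trace (the dotted jq path is equivalent to the piped filter shown above), would be:

    # read the accumulated transient transport error count for nvme0n1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The same bperf.sock socket is reused by the next bdevperf instance launched just above with -w randwrite -o 131072 -q 16, which the script waits for below.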
00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93126 /var/tmp/bperf.sock 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93126 ']' 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.017 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:56.017 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:56.017 Zero copy mechanism will not be used. 00:18:56.017 [2024-07-24 18:06:02.856186] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:18:56.017 [2024-07-24 18:06:02.856284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93126 ] 00:18:56.017 [2024-07-24 18:06:02.990225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.276 [2024-07-24 18:06:03.093421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.276 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.276 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:56.276 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:56.276 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:56.548 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:56.548 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.548 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:56.548 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.548 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:56.548 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:56.832 nvme0n1 00:18:56.832 18:06:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:56.832 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.832 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:56.832 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.832 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:56.832 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:57.092 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:57.092 Zero copy mechanism will not be used. 00:18:57.092 Running I/O for 2 seconds... 00:18:57.092 [2024-07-24 18:06:03.917743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.092 [2024-07-24 18:06:03.918124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-24 18:06:03.918158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.092 [2024-07-24 18:06:03.922335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.092 [2024-07-24 18:06:03.922699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-24 18:06:03.922739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.092 [2024-07-24 18:06:03.927011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.092 [2024-07-24 18:06:03.927397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-24 18:06:03.927433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.092 [2024-07-24 18:06:03.931705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.092 [2024-07-24 18:06:03.932070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-24 18:06:03.932109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.092 [2024-07-24 18:06:03.936318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.092 [2024-07-24 18:06:03.936676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-24 18:06:03.936713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
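Just before the 2-second run whose digest-error records begin above, the xtrace (host/digest.sh@61 through @69) shows the fresh bdevperf instance being configured: NVMe error statistics and the bdev retry count are set over bperf.sock, crc32c error injection is first disabled, the subsystem is attached with TCP data digest enabled (--ddgst), injection is then switched to corrupt, and perform_tests starts the timed workload. A sketch of that RPC sequence with the arguments copied from the trace; rpc_cmd's socket is not expanded in this log, so $TARGET_RPC below is only a placeholder for whichever application it addresses:

    # sockets/paths as they appear in the trace; $TARGET_RPC is a placeholder,
    # since the log does not show which socket rpc_cmd resolves to
    BPERF_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    TARGET_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'

    # enable per-NVMe error counters and set the bdev retry count as in the trace
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # start with crc32c injection disabled, as the script does
    $TARGET_RPC accel_error_inject_error -o crc32c -t disable
    # attach the subsystem with TCP data digest enabled
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # flip crc32c error injection from disable to corrupt (arguments as in the trace)
    $TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # start the timed randwrite workload on the bdevperf instance
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the corrupted crc32c in place, every completed write in the run below is reported with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status seen in the surrounding records.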
00:18:57.092 [2024-07-24 18:06:03.940996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.092 [2024-07-24 18:06:03.941371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-24 18:06:03.941408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.092 [2024-07-24 18:06:03.945710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.092 [2024-07-24 18:06:03.946080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-24 18:06:03.946123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.092 [2024-07-24 18:06:03.950376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.092 [2024-07-24 18:06:03.950751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-24 18:06:03.950787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.092 [2024-07-24 18:06:03.954982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.092 [2024-07-24 18:06:03.955371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-24 18:06:03.955406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.092 [2024-07-24 18:06:03.959555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.092 [2024-07-24 18:06:03.959925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-24 18:06:03.959961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.092 [2024-07-24 18:06:03.964185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.092 [2024-07-24 18:06:03.964572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:03.964618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:03.968701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:03.969047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:03.969081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:03.973183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:03.973525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:03.973558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:03.977774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:03.978115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:03.978167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:03.982402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:03.982748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:03.982798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:03.987029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:03.987391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:03.987425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:03.991555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:03.991888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:03.991920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:03.996007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:03.996353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:03.996385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.000559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.000930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.000965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.005088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.005439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.005461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.009589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.009940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.009989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.014190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.014548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.014587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.018697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.019061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.019095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.023292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.023683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.023717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.029284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.029638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.029673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.033912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.034284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.034323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.038406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.038772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.038805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.043006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.043383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.043425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.047676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.048022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.048059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.052152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.052513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.052544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.056550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.056893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.056927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.061014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.061388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-24 18:06:04.061437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.093 [2024-07-24 18:06:04.065503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.093 [2024-07-24 18:06:04.065858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 
[2024-07-24 18:06:04.065897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.353 [2024-07-24 18:06:04.069954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.353 [2024-07-24 18:06:04.070333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.353 [2024-07-24 18:06:04.070377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.353 [2024-07-24 18:06:04.074427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.353 [2024-07-24 18:06:04.074784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.353 [2024-07-24 18:06:04.074826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.353 [2024-07-24 18:06:04.078881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.353 [2024-07-24 18:06:04.079214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.353 [2024-07-24 18:06:04.079254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.353 [2024-07-24 18:06:04.083304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.353 [2024-07-24 18:06:04.083665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.353 [2024-07-24 18:06:04.083698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.087800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.088168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.088206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.092217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.092574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.092609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.096717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.097060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.097091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.101151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.101518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.101556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.105595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.105925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.105960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.109979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.110312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.110337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.114288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.114619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.114649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.118574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.118866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.118910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.122851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.123162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.123204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.127282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.127633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.127655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.131623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.131960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.132002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.136067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.136434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.136473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.140758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.141102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.141141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.145280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.145621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.145665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.149864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.150188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.150224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.154372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.154713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.154740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.158976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.159319] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.159341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.163519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.163866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.163897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.167945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.168271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.168293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.172390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.172694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.172725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.176897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.177210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.177262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.181445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.181770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.181801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.186000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.186354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.186392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.190514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.190852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.190884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.195042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.195381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.195422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.199772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.354 [2024-07-24 18:06:04.200115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.354 [2024-07-24 18:06:04.200147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.354 [2024-07-24 18:06:04.204326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.204675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.204706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.208872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.209212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.209253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.213440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.213758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.213789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.217947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.218260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.218305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.222358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 
[2024-07-24 18:06:04.222662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.222693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.226789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.227091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.227132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.231269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.231605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.231628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.235704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.236028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.236064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.240187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.240521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.240554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.244655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.244964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.244985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.249134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.249497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.249529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.253703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) 
with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.253999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.254021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.258213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.258525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.258564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.262668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.263002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.263049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.267207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.267555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.267593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.271826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.272135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.272171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.276203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.276519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.276549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.280572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.280879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.280910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.284891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.285207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.285234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.289341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.289665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.289703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.293842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.294180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.294213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.298399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.298739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.298775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.303044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.303397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.303436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.307668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.307998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.308035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.312263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.312574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.312610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.316823] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.317142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.317178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.355 [2024-07-24 18:06:04.321394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.355 [2024-07-24 18:06:04.321715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.355 [2024-07-24 18:06:04.321752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.356 [2024-07-24 18:06:04.325910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.356 [2024-07-24 18:06:04.326222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.356 [2024-07-24 18:06:04.326271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.330420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.330733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.330770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.334910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.335239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.335282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.339391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.339731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.339765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.343941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.344277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.344313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
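[editor's note] The repeated tcp.c data_crc32_calc_done errors above are the expected output of this digest-error test: each WRITE's NVMe/TCP data digest (DDGST, a CRC32C over the PDU payload) fails verification on the qpair, and the command completes back to the initiator as COMMAND TRANSIENT TRANSPORT ERROR (SCT 0x0 / SC 0x22). The following is a minimal, self-contained sketch of a generic CRC32C data-digest check; it is illustrative only, not SPDK code, and the exact seed/final-XOR/byte-order conventions of the on-wire DDGST field are defined by the NVMe/TCP specification rather than by this routine.

/*
 * Illustrative sketch only -- not SPDK code. A receiver recomputes the
 * CRC32C of the PDU data and, on mismatch with the carried DDGST, fails
 * the command with a transport-level error (as seen in the log above).
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;              /* standard CRC-32C init value */

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            /* reflected Castagnoli polynomial 0x82F63B78 */
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;                /* final inversion */
}

/* Hypothetical helper mirroring what a "data digest error" means. */
static int data_digest_ok(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) == ddgst;
}

int main(void)
{
    const char *payload = "123456789";       /* well-known CRC-32C check string */
    uint32_t good = crc32c((const uint8_t *)payload, strlen(payload));

    printf("crc32c(\"123456789\") = 0x%08X (expected 0xE3069283)\n", good);
    printf("intact  payload: %s\n",
           data_digest_ok((const uint8_t *)payload, strlen(payload), good)
               ? "digest ok" : "digest MISMATCH -> transient transport error");
    printf("corrupt payload: %s\n",
           data_digest_ok((const uint8_t *)"123456780", strlen(payload), good)
               ? "digest ok" : "digest MISMATCH -> transient transport error");
    return 0;
}

Under these assumptions, a corrupted payload fails the digest comparison and the command is completed with a retryable transport error, which is exactly the *NOTICE* completion pattern the test asserts on in the log entries that follow. [end note]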
00:18:57.616 [2024-07-24 18:06:04.348405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.348724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.348760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.352963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.353309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.353342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.357469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.357821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.357853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.362034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.362398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.362433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.366678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.367014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.367053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.371199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.371564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.371602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.375832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.376175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.376213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.380472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.380818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.380857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.384970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.385311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.385347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.389413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.389739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.389775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.393871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.394186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.394222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.398304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.398613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.398650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.402750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.403066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.403104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.407348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.407703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.407741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.411905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.412250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.412300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.416346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.416673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.416710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.420964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.421317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.421349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.425504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.616 [2024-07-24 18:06:04.425847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.616 [2024-07-24 18:06:04.425885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.616 [2024-07-24 18:06:04.430040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.430408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.430445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.434560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.434879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.434914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.438938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.439298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.439331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.443388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.443748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.443786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.447896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.448231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.448279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.452431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.452759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.452795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.456827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.457176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.457215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.461302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.461636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.461671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.465718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.466053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.466091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.470132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.470483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 
[2024-07-24 18:06:04.470520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.474609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.474923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.474960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.478953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.479287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.479324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.483356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.483695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.483727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.487820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.488134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.488171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.492340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.492672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.492709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.496753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.497086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.497123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.501289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.501606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.501647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.505842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.506181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.506213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.510387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.510725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.510764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.514966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.515313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.515349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.519538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.519871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.519908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.524121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.524461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.524498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.528682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.528999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.529036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.533199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.533534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.533571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.537718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.538049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.538087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.542338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.542662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.542700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.546885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.617 [2024-07-24 18:06:04.547226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.617 [2024-07-24 18:06:04.547282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.617 [2024-07-24 18:06:04.551498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.618 [2024-07-24 18:06:04.551820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.618 [2024-07-24 18:06:04.551857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.618 [2024-07-24 18:06:04.556044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.618 [2024-07-24 18:06:04.556388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.618 [2024-07-24 18:06:04.556425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.618 [2024-07-24 18:06:04.560595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.618 [2024-07-24 18:06:04.560940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.618 [2024-07-24 18:06:04.560974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.618 [2024-07-24 18:06:04.565138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.618 [2024-07-24 18:06:04.565478] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.618 [2024-07-24 18:06:04.565509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.618 [2024-07-24 18:06:04.569727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.618 [2024-07-24 18:06:04.570045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.618 [2024-07-24 18:06:04.570082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.618 [2024-07-24 18:06:04.574350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.618 [2024-07-24 18:06:04.574677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.618 [2024-07-24 18:06:04.574713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.618 [2024-07-24 18:06:04.578920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.618 [2024-07-24 18:06:04.579243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.618 [2024-07-24 18:06:04.579282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.618 [2024-07-24 18:06:04.583438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.618 [2024-07-24 18:06:04.583793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.618 [2024-07-24 18:06:04.583825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.618 [2024-07-24 18:06:04.588041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.618 [2024-07-24 18:06:04.588401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.618 [2024-07-24 18:06:04.588438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.592682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 [2024-07-24 18:06:04.593039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.593087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.597458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 [2024-07-24 18:06:04.597811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.597855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.602066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 [2024-07-24 18:06:04.602423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.602462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.606708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 [2024-07-24 18:06:04.607054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.607093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.611381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 [2024-07-24 18:06:04.611726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.611764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.615987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 [2024-07-24 18:06:04.616341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.616379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.620535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 [2024-07-24 18:06:04.620880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.620918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.625135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 [2024-07-24 18:06:04.625505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.625539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.629785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 
[2024-07-24 18:06:04.630151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.630184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.634445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 [2024-07-24 18:06:04.634769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.634803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.639048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 [2024-07-24 18:06:04.639423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.639456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.878 [2024-07-24 18:06:04.643589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.878 [2024-07-24 18:06:04.643923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.878 [2024-07-24 18:06:04.643960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.647972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.648324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.648381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.652448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.652786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.652824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.656949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.657305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.657375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.661565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with 
pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.661907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.661939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.666142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.666514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.666548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.670668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.671034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.671089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.675278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.675650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.675682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.679873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.680205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.680253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.684412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.684772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.684811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.688944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.689312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.689363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.693448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.693782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.693813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.697946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.698285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.698316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.702408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.702743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.702774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.706819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.707154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.707185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.711213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.711569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.711601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.715599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.715929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.715968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.719944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.720278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.720313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.724281] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.724582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.724619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.728644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.728960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.728991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.733075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.733411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.733448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.737544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.737872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.737909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.741931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.742237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.742281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.746301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.746600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.746637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.750660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.750963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.751000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:57.879 [2024-07-24 18:06:04.755021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.755346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.755376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.759327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.879 [2024-07-24 18:06:04.759652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.879 [2024-07-24 18:06:04.759736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.879 [2024-07-24 18:06:04.763754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.880 [2024-07-24 18:06:04.764065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.880 [2024-07-24 18:06:04.764101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.880 [2024-07-24 18:06:04.768156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.880 [2024-07-24 18:06:04.768472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.880 [2024-07-24 18:06:04.768503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.880 [2024-07-24 18:06:04.772615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.880 [2024-07-24 18:06:04.772921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.880 [2024-07-24 18:06:04.772958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.880 [2024-07-24 18:06:04.777001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.880 [2024-07-24 18:06:04.777324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.880 [2024-07-24 18:06:04.777360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.880 [2024-07-24 18:06:04.781368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:57.880 [2024-07-24 18:06:04.781658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.880 [2024-07-24 18:06:04.781695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:57.880 [2024-07-24 18:06:04.785764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90
00:18:57.880 [2024-07-24 18:06:04.786063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:57.880 [2024-07-24 18:06:04.786099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:57.880 [2024-07-24 18:06:04.790090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90
00:18:57.880 [2024-07-24 18:06:04.790409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:57.880 [2024-07-24 18:06:04.790445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-message sequence (tcp.c:2113:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90, the matching WRITE sqid:1 cid:15 print_command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every write between 18:06:04.794 and 18:06:05.365, with only the lba, sqhd, and timestamps changing ...]
00:18:58.405 [2024-07-24 18:06:05.369098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90
00:18:58.405 [2024-07-24 18:06:05.369379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.405 [2024-07-24 18:06:05.369409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.405 [2024-07-24 18:06:05.373103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.405 [2024-07-24 18:06:05.373377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.405 [2024-07-24 18:06:05.373410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.377138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.377415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.377447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.381114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.381372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.381402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.385170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.385453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.385483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.389132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.389405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.389452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.393175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.393484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.393521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.397278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.397566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.397600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.401346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.401612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.401642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.405439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.405697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.405732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.409488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.409738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.409773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.413504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.413771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.413806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.417553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.417825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.417867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.421599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.421847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.421883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.425553] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.425799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.425828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.429510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.429776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.429807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.433591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.433847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.433878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.437599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.437855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.437885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.441607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.441861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.441892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.445651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.445902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.445937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.665 [2024-07-24 18:06:05.449705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.449952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.449993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:18:58.665 [2024-07-24 18:06:05.453752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.665 [2024-07-24 18:06:05.454008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.665 [2024-07-24 18:06:05.454044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.457739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.457988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.458022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.461726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.461986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.462018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.465724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.465973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.466004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.469682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.469926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.469960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.473650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.473907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.473938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.477570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.477818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.477849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.481503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.481750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.481780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.485388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.485627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.485656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.489312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.489552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.489581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.493327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.493607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.493637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.497305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.497548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.497578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.501270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.501528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.501558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.505091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.505369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.505399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.509110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.509388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.509422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.513080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.513395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.513426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.517136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.517417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.517442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.521169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.521459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.521491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.525147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.525419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.525450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.529157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.529444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.529475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.533121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.533407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.533449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.537202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.537467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.537506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.541291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.541542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.541574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.545318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.545574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.545610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.549366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.549622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.549651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.553374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.553623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.553658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.557419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.557689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.666 [2024-07-24 18:06:05.557718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.666 [2024-07-24 18:06:05.561462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.666 [2024-07-24 18:06:05.561730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 
[2024-07-24 18:06:05.561770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.565587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.565855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.565883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.569491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.569761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.569788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.573476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.573742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.573772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.577475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.577717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.577746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.581459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.581734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.581765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.587327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.588522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.588609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.593689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.594113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.594194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.599371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.599918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.599994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.604578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.604926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.604989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.609450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.609777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.609845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.614213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.614433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.614483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.619064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.619211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.619262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.623901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.624136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.624190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.628726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.628916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.628967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.633993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.634138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.634190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.667 [2024-07-24 18:06:05.638991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.667 [2024-07-24 18:06:05.639113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.667 [2024-07-24 18:06:05.639148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.926 [2024-07-24 18:06:05.643998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.926 [2024-07-24 18:06:05.644136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.926 [2024-07-24 18:06:05.644187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.926 [2024-07-24 18:06:05.648958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.926 [2024-07-24 18:06:05.649071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.926 [2024-07-24 18:06:05.649108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.926 [2024-07-24 18:06:05.653707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.926 [2024-07-24 18:06:05.653837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.926 [2024-07-24 18:06:05.653874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.926 [2024-07-24 18:06:05.658362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.926 [2024-07-24 18:06:05.658508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.926 [2024-07-24 18:06:05.658560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.926 [2024-07-24 18:06:05.663068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.926 [2024-07-24 18:06:05.663296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.926 [2024-07-24 18:06:05.663343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.926 [2024-07-24 18:06:05.667801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.926 [2024-07-24 18:06:05.667951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.926 [2024-07-24 18:06:05.667987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.926 [2024-07-24 18:06:05.672429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.926 [2024-07-24 18:06:05.672559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.926 [2024-07-24 18:06:05.672592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.926 [2024-07-24 18:06:05.677069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.926 [2024-07-24 18:06:05.677220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.677279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.681800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.681934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.681974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.686552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.686767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.686818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.691289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.691479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.691530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.696032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.696286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.696322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.700665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.700904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.700940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.705355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.705488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.705525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.710055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.710268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.710315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.714551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.714927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.714986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.719803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.720172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.720237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.724687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.724946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.724983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.729353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 
18:06:05.729704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.729754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.734087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.734268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.734333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.739050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.739328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.739387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.743877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.744266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.744330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.748618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.748784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.748844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.753427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.753623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.753681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.758333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.758580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.758635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.763244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with 
pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.763463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.763547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.768125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.768332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.768376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.772875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.773071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.773125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.777659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.777855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.777897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.782425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.782638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.782680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.787139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.787369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.787412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.792073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.792294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.792333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.796914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.797113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.797150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.801753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.801962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.927 [2024-07-24 18:06:05.802003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.927 [2024-07-24 18:06:05.806482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.927 [2024-07-24 18:06:05.806665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.806707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.811145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.811311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.811352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.816077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.816482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.816548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.820861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.821052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.821097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.825728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.826051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.826114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.830519] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.830655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.830697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.835387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.835568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.835610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.840340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.840639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.840704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.845140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.845452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.845513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.849833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.849995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.850041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.854766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.854980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.855026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.859679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.859882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.859928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.928 
[2024-07-24 18:06:05.864563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.864792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.864838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.869421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.869627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.869688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.874314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.874518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.874564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.879122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.879396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.879454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.883828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.884203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.884284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.888519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.888696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.888740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.928 [2024-07-24 18:06:05.893498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.893640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.893685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:58.928 [2024-07-24 18:06:05.899562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:58.928 [2024-07-24 18:06:05.899838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.928 [2024-07-24 18:06:05.899920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.187 [2024-07-24 18:06:05.905860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4e4c0) with pdu=0x2000190fef90 00:18:59.187 [2024-07-24 18:06:05.906057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.187 [2024-07-24 18:06:05.906108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.187 00:18:59.187 Latency(us) 00:18:59.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.187 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:59.187 nvme0n1 : 2.00 6972.07 871.51 0.00 0.00 2290.37 1739.82 10797.84 00:18:59.187 =================================================================================================================== 00:18:59.187 Total : 6972.07 871.51 0.00 0.00 2290.37 1739.82 10797.84 00:18:59.187 0 00:18:59.187 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:59.187 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:59.187 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:59.187 | .driver_specific 00:18:59.187 | .nvme_error 00:18:59.187 | .status_code 00:18:59.187 | .command_transient_transport_error' 00:18:59.187 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 450 > 0 )) 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93126 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93126 ']' 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93126 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93126 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:59.446 killing process with pid 93126 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
93126' 00:18:59.446 Received shutdown signal, test time was about 2.000000 seconds 00:18:59.446 00:18:59.446 Latency(us) 00:18:59.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.446 =================================================================================================================== 00:18:59.446 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93126 00:18:59.446 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93126 00:18:59.704 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 92804 00:18:59.704 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 92804 ']' 00:18:59.704 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 92804 00:18:59.704 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:59.704 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.704 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92804 00:18:59.704 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.704 killing process with pid 92804 00:18:59.704 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.704 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92804' 00:18:59.704 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 92804 00:18:59.704 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 92804 00:18:59.962 00:18:59.962 real 0m18.754s 00:18:59.962 user 0m35.737s 00:18:59.962 sys 0m5.324s 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.962 ************************************ 00:18:59.962 END TEST nvmf_digest_error 00:18:59.962 ************************************ 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:59.962 rmmod nvme_tcp 00:18:59.962 rmmod nvme_fabrics 00:18:59.962 rmmod nvme_keyring 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:59.962 18:06:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 92804 ']' 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 92804 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 92804 ']' 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 92804 00:18:59.962 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92804) - No such process 00:18:59.962 Process with pid 92804 is not found 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 92804 is not found' 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:59.962 00:18:59.962 real 0m38.145s 00:18:59.962 user 1m11.113s 00:18:59.962 sys 0m10.858s 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.962 ************************************ 00:18:59.962 END TEST nvmf_digest 00:18:59.962 ************************************ 00:18:59.962 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:00.222 18:06:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:19:00.222 18:06:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:19:00.222 18:06:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:00.222 18:06:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:00.222 18:06:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:00.222 18:06:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.222 ************************************ 00:19:00.222 START TEST nvmf_mdns_discovery 00:19:00.222 ************************************ 00:19:00.222 18:06:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:00.222 * Looking for test storage... 
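The pass/fail decision for nvmf_digest_error above hinges on one counter: the number of commands that completed with a transient transport error after the injected digest failures. A minimal stand-alone equivalent of that check, using the same rpc.py invocation and jq filter that appear in the log (socket path and bdev name as shown there; the count of 450 is specific to this run):

    # Count commands that completed with a transient transport error on the
    # bperf-attached bdev; a non-zero count means the digest-error injections
    # were seen and retried as expected.
    errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
             bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0]
                  | .driver_specific
                  | .nvme_error
                  | .status_code
                  | .command_transient_transport_error')
    (( errs > 0 )) && echo "transient transport errors observed: $errs"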
00:19:00.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 
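The host identity used for every connect attempt in this test comes from nvme-cli, as captured in the nvme gen-hostnqn call above. Schematically (the UUID is freshly generated on each run, and the host-ID derivation below is a simplified sketch of what nvmf/common.sh does, not a verbatim copy):

    # Generate a host NQN; the host ID is the UUID portion after the last ':'.
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")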
00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:00.222 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:00.223 18:06:07 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:00.223 Cannot find device "nvmf_tgt_br" 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:00.223 Cannot find device "nvmf_tgt_br2" 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:00.223 Cannot find device "nvmf_tgt_br" 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:00.223 Cannot find device "nvmf_tgt_br2" 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:00.223 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:00.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:00.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:00.481 
18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:00.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:19:00.481 00:19:00.481 --- 10.0.0.2 ping statistics --- 00:19:00.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.481 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:00.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:19:00.481 00:19:00.481 --- 10.0.0.3 ping statistics --- 00:19:00.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.481 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:00.481 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:19:00.481 00:19:00.481 --- 10.0.0.1 ping statistics --- 00:19:00.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.481 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:00.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=93409 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 93409 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 93409 ']' 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.482 18:06:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:00.812 [2024-07-24 18:06:07.495790] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
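The nvmf_veth_init sequence above boils down to a small veth-plus-bridge topology: the initiator keeps 10.0.0.1 in the default namespace while the target owns 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk. Condensed from the commands in the log (same interface names and addresses; the "Cannot find device" output comes from cleanup of devices that do not exist yet and is omitted here):

    # Initiator side: 10.0.0.1 in the default netns. Target side: 10.0.0.2/10.0.0.3
    # inside nvmf_tgt_ns_spdk. A bridge (nvmf_br) joins the veth peers.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                # initiator -> target, both target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # target -> initiator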
00:19:00.812 [2024-07-24 18:06:07.496067] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.812 [2024-07-24 18:06:07.631597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.812 [2024-07-24 18:06:07.754677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.812 [2024-07-24 18:06:07.754935] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.812 [2024-07-24 18:06:07.755062] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.812 [2024-07-24 18:06:07.755132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.812 [2024-07-24 18:06:07.755172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.812 [2024-07-24 18:06:07.755254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.747 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:01.747 [2024-07-24 18:06:08.721177] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.006 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.006 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:02.006 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:02.007 [2024-07-24 18:06:08.729352] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:02.007 null0 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:02.007 null1 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:02.007 null2 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:02.007 null3 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:02.007 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
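With the namespaces wired up, the target-side bring-up above is a short RPC sequence against the nvmf_tgt that was started with --wait-for-rpc inside the netns. A condensed sketch using the same RPC names and arguments that appear in the log (rpc_cmd in the log resolves to the scripts/rpc.py path shown earlier, talking to the target's default RPC socket):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_set_config --discovery-filter=address       # matches DISCOVERY_FILTER=address above
    $RPC framework_start_init                              # finish init deferred by --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    for i in 0 1 2 3; do $RPC bdev_null_create "null$i" 1000 512; done   # null bdevs: 1000 MB, 512 B blocks
    $RPC bdev_wait_for_examine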
00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=93464 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 93464 /tmp/host.sock 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 93464 ']' 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:02.007 18:06:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:02.007 [2024-07-24 18:06:08.826669] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:19:02.007 [2024-07-24 18:06:08.827037] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93464 ] 00:19:02.007 [2024-07-24 18:06:08.965683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.265 [2024-07-24 18:06:09.086719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.237 18:06:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.237 18:06:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:03.237 18:06:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:19:03.237 18:06:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:19:03.237 18:06:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:19:03.237 18:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=93493 00:19:03.237 18:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:19:03.237 18:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:19:03.237 18:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:19:03.237 Process 971 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:19:03.237 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:19:03.237 Successfully dropped root privileges. 00:19:03.237 avahi-daemon 0.8 starting up. 00:19:03.237 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:19:03.237 Successfully called chroot(). 
00:19:03.237 Successfully dropped remaining capabilities. 00:19:04.170 No service file found in /etc/avahi/services. 00:19:04.170 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:19:04.170 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:19:04.170 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:19:04.170 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:19:04.170 Network interface enumeration completed. 00:19:04.170 Registering new address record for fe80::64d1:4dff:fe8d:cc31 on nvmf_tgt_if2.*. 00:19:04.170 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:19:04.170 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:19:04.170 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:19:04.170 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 2192254656. 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:04.170 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.427 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:19:04.427 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.428 [2024-07-24 18:06:11.355470] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.428 [2024-07-24 18:06:11.361994] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.428 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.686 [2024-07-24 18:06:11.410005] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.686 [2024-07-24 18:06:11.418000] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.686 18:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:19:05.301 [2024-07-24 18:06:12.255474] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:06.236 [2024-07-24 18:06:12.855526] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:06.236 [2024-07-24 18:06:12.855586] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: 
fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:06.236 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:06.236 cookie is 0 00:19:06.236 is_local: 1 00:19:06.236 our_own: 0 00:19:06.236 wide_area: 0 00:19:06.236 multicast: 1 00:19:06.236 cached: 1 00:19:06.236 [2024-07-24 18:06:12.955493] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:06.236 [2024-07-24 18:06:12.955561] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:06.236 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:06.236 cookie is 0 00:19:06.236 is_local: 1 00:19:06.236 our_own: 0 00:19:06.236 wide_area: 0 00:19:06.236 multicast: 1 00:19:06.236 cached: 1 00:19:06.236 [2024-07-24 18:06:12.955577] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:06.236 [2024-07-24 18:06:13.055506] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:06.236 [2024-07-24 18:06:13.055556] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:06.236 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:06.236 cookie is 0 00:19:06.236 is_local: 1 00:19:06.236 our_own: 0 00:19:06.236 wide_area: 0 00:19:06.236 multicast: 1 00:19:06.236 cached: 1 00:19:06.236 [2024-07-24 18:06:13.155500] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:06.236 [2024-07-24 18:06:13.155565] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:06.236 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:06.236 cookie is 0 00:19:06.236 is_local: 1 00:19:06.236 our_own: 0 00:19:06.236 wide_area: 0 00:19:06.236 multicast: 1 00:19:06.236 cached: 1 00:19:06.236 [2024-07-24 18:06:13.155582] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:19:07.172 [2024-07-24 18:06:13.866717] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:07.172 [2024-07-24 18:06:13.866766] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:07.172 [2024-07-24 18:06:13.866782] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:07.172 [2024-07-24 18:06:13.952843] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:19:07.172 [2024-07-24 18:06:14.010215] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:07.172 [2024-07-24 18:06:14.010263] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:07.172 [2024-07-24 18:06:14.066459] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:07.172 [2024-07-24 18:06:14.066498] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:07.172 [2024-07-24 18:06:14.066514] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:07.430 [2024-07-24 18:06:14.152586] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:19:07.430 [2024-07-24 18:06:14.208821] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:07.430 [2024-07-24 18:06:14.208868] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.961 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.962 18:06:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:19:10.894 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:19:10.894 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.894 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:10.894 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.894 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.894 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:10.894 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.152 [2024-07-24 18:06:17.937902] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:11.152 [2024-07-24 18:06:17.939096] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:11.152 [2024-07-24 18:06:17.939132] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:11.152 [2024-07-24 18:06:17.939168] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:11.152 [2024-07-24 18:06:17.939181] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.152 18:06:17 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.152 [2024-07-24 18:06:17.949905] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:11.152 [2024-07-24 18:06:17.951092] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:11.152 [2024-07-24 18:06:17.951143] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.152 18:06:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:19:11.152 [2024-07-24 18:06:18.082197] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:19:11.152 [2024-07-24 18:06:18.082425] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:19:11.411 [2024-07-24 18:06:18.140633] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:11.411 [2024-07-24 18:06:18.140672] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:11.411 [2024-07-24 18:06:18.140680] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:11.411 [2024-07-24 18:06:18.140701] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:11.411 [2024-07-24 18:06:18.141477] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:11.411 [2024-07-24 18:06:18.141488] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:11.411 [2024-07-24 18:06:18.141494] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:11.411 [2024-07-24 18:06:18.141509] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:11.411 [2024-07-24 18:06:18.187348] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:11.411 [2024-07-24 18:06:18.187384] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:11.411 [2024-07-24 18:06:18.187428] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:11.411 [2024-07-24 18:06:18.187436] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:12.347 18:06:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:19:12.347 18:06:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:12.347 18:06:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:12.347 18:06:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:12.347 18:06:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.347 18:06:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:12.347 18:06:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:12.347 18:06:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.347 18:06:19 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.347 [2024-07-24 18:06:19.258910] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:12.347 [2024-07-24 18:06:19.258946] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:12.347 [2024-07-24 18:06:19.258978] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:12.347 [2024-07-24 18:06:19.258990] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.347 [2024-07-24 18:06:19.266908] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:12.347 [2024-07-24 18:06:19.266953] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:12.347 [2024-07-24 18:06:19.268140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.347 [2024-07-24 18:06:19.268182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.347 [2024-07-24 
18:06:19.268197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.347 [2024-07-24 18:06:19.268208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.347 [2024-07-24 18:06:19.268219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.347 [2024-07-24 18:06:19.268229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.347 [2024-07-24 18:06:19.268249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.347 [2024-07-24 18:06:19.268260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.347 [2024-07-24 18:06:19.268270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.347 [2024-07-24 18:06:19.268322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.347 [2024-07-24 18:06:19.268334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.347 [2024-07-24 18:06:19.268345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.347 [2024-07-24 18:06:19.268356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.347 [2024-07-24 18:06:19.268367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.347 [2024-07-24 18:06:19.268377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.347 [2024-07-24 18:06:19.268389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.347 [2024-07-24 18:06:19.268399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.347 [2024-07-24 18:06:19.268409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.347 18:06:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:19:12.347 [2024-07-24 18:06:19.278105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.347 [2024-07-24 18:06:19.278149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.347 [2024-07-24 18:06:19.288176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:12.347 [2024-07-24 18:06:19.288218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] 
resetting controller 00:19:12.347 [2024-07-24 18:06:19.288351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.347 [2024-07-24 18:06:19.288372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ab380 with addr=10.0.0.3, port=4420 00:19:12.348 [2024-07-24 18:06:19.288384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.348 [2024-07-24 18:06:19.288426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.348 [2024-07-24 18:06:19.288440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc970 with addr=10.0.0.2, port=4420 00:19:12.348 [2024-07-24 18:06:19.288450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.348 [2024-07-24 18:06:19.288466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.348 [2024-07-24 18:06:19.288481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.348 [2024-07-24 18:06:19.288505] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:12.348 [2024-07-24 18:06:19.288516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:12.348 [2024-07-24 18:06:19.288528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:12.348 [2024-07-24 18:06:19.288541] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.348 [2024-07-24 18:06:19.288550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.348 [2024-07-24 18:06:19.288560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.348 [2024-07-24 18:06:19.288573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.348 [2024-07-24 18:06:19.288583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
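The connect() failed, errno = 111 (ECONNREFUSED) retries that start here are expected: the 4420 listeners were just removed while the host still held active paths to them. Condensed from the target-side trace above (rpc_cmd without -s drives the nvmf target in this run), the sequence that opens this window is roughly:

  # Advertise the new 4421 paths first, then withdraw the original 4420 listeners.
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4421
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
  sleep 1   # host/mdns_discovery.sh@162: give the discovery pollers time to re-read the log page

Until each discovery poller fetches the updated discovery log page and drops the 10.0.0.x:4420 entries, bdev_nvme keeps trying to reconnect the failed 4420 paths, which produces the repeated "controller reinitialization failed" / "Resetting controller failed." blocks below.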
00:19:12.348 [2024-07-24 18:06:19.298263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:12.348 [2024-07-24 18:06:19.298352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.348 [2024-07-24 18:06:19.298370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc970 with addr=10.0.0.2, port=4420 00:19:12.348 [2024-07-24 18:06:19.298382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.348 [2024-07-24 18:06:19.298395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:12.348 [2024-07-24 18:06:19.298416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.348 [2024-07-24 18:06:19.298459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.348 [2024-07-24 18:06:19.298473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ab380 with addr=10.0.0.3, port=4420 00:19:12.348 [2024-07-24 18:06:19.298483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.348 [2024-07-24 18:06:19.298494] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.348 [2024-07-24 18:06:19.298503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.348 [2024-07-24 18:06:19.298514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.348 [2024-07-24 18:06:19.298527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.348 [2024-07-24 18:06:19.298538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.348 [2024-07-24 18:06:19.298552] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:12.348 [2024-07-24 18:06:19.298562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:12.348 [2024-07-24 18:06:19.298572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:12.348 [2024-07-24 18:06:19.298584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:12.348 [2024-07-24 18:06:19.308317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:12.348 [2024-07-24 18:06:19.308395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.348 [2024-07-24 18:06:19.308413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc970 with addr=10.0.0.2, port=4420 00:19:12.348 [2024-07-24 18:06:19.308424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.348 [2024-07-24 18:06:19.308438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.348 [2024-07-24 18:06:19.308461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.348 [2024-07-24 18:06:19.308471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.348 [2024-07-24 18:06:19.308482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.348 [2024-07-24 18:06:19.308495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:12.348 [2024-07-24 18:06:19.308507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.348 [2024-07-24 18:06:19.308561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.348 [2024-07-24 18:06:19.308575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ab380 with addr=10.0.0.3, port=4420 00:19:12.348 [2024-07-24 18:06:19.308586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.348 [2024-07-24 18:06:19.308599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.348 [2024-07-24 18:06:19.308613] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:12.348 [2024-07-24 18:06:19.308624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:12.348 [2024-07-24 18:06:19.308634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:12.348 [2024-07-24 18:06:19.308646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:12.348 [2024-07-24 18:06:19.318370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:12.348 [2024-07-24 18:06:19.318462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.348 [2024-07-24 18:06:19.318480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc970 with addr=10.0.0.2, port=4420 00:19:12.348 [2024-07-24 18:06:19.318491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.348 [2024-07-24 18:06:19.318506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.348 [2024-07-24 18:06:19.318521] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.348 [2024-07-24 18:06:19.318531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.348 [2024-07-24 18:06:19.318542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.348 [2024-07-24 18:06:19.318563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.348 [2024-07-24 18:06:19.318577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:12.348 [2024-07-24 18:06:19.318626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.348 [2024-07-24 18:06:19.318641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ab380 with addr=10.0.0.3, port=4420 00:19:12.348 [2024-07-24 18:06:19.318651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.348 [2024-07-24 18:06:19.318665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.348 [2024-07-24 18:06:19.318679] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:12.348 [2024-07-24 18:06:19.318689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:12.348 [2024-07-24 18:06:19.318699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:12.348 [2024-07-24 18:06:19.318711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:12.607 [2024-07-24 18:06:19.328428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:12.607 [2024-07-24 18:06:19.328503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.607 [2024-07-24 18:06:19.328520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc970 with addr=10.0.0.2, port=4420 00:19:12.607 [2024-07-24 18:06:19.328531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.607 [2024-07-24 18:06:19.328545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.607 [2024-07-24 18:06:19.328568] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.607 [2024-07-24 18:06:19.328578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.607 [2024-07-24 18:06:19.328589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.607 [2024-07-24 18:06:19.328602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.607 [2024-07-24 18:06:19.328624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:12.607 [2024-07-24 18:06:19.328672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.607 [2024-07-24 18:06:19.328686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ab380 with addr=10.0.0.3, port=4420 00:19:12.607 [2024-07-24 18:06:19.328696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.607 [2024-07-24 18:06:19.328710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.607 [2024-07-24 18:06:19.328724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:12.607 [2024-07-24 18:06:19.328734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:12.607 [2024-07-24 18:06:19.328744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:12.607 [2024-07-24 18:06:19.328756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:12.607 [2024-07-24 18:06:19.338475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:12.607 [2024-07-24 18:06:19.338549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.607 [2024-07-24 18:06:19.338565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc970 with addr=10.0.0.2, port=4420 00:19:12.607 [2024-07-24 18:06:19.338576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.607 [2024-07-24 18:06:19.338590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.607 [2024-07-24 18:06:19.338604] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.607 [2024-07-24 18:06:19.338614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.607 [2024-07-24 18:06:19.338624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.607 [2024-07-24 18:06:19.338639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.607 [2024-07-24 18:06:19.338662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:12.607 [2024-07-24 18:06:19.338710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.607 [2024-07-24 18:06:19.338725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ab380 with addr=10.0.0.3, port=4420 00:19:12.607 [2024-07-24 18:06:19.338735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.607 [2024-07-24 18:06:19.338749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.607 [2024-07-24 18:06:19.338763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:12.607 [2024-07-24 18:06:19.338772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:12.607 [2024-07-24 18:06:19.338782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:12.607 [2024-07-24 18:06:19.338795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
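Each of these blocks is one more reconnect attempt per controller (nqn.2016-06.io.spdk:cnode0 over 10.0.0.2:4420 and nqn.2016-06.io.spdk:cnode20 over 10.0.0.3:4420) against listeners that no longer exist. The test itself simply sleeps for a second before re-checking; a more explicit wait could poll the path list instead. A purely hypothetical sketch (wait_for_path_switch is not part of the original script), reusing the get_subsystem_paths helper sketched earlier:

  # Hypothetical: poll until only the 4421 path remains for the given controller name.
  wait_for_path_switch() {
      local ctrlr=$1 timeout=${2:-10} start=$SECONDS
      while [[ "$(get_subsystem_paths "$ctrlr")" != "4421" ]]; do
          (( SECONDS - start < timeout )) || return 1
          sleep 0.5
      done
  }
  # e.g. wait_for_path_switch mdns0_nvme0 && wait_for_path_switch mdns1_nvme0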
00:19:12.607 [2024-07-24 18:06:19.348524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:12.607 [2024-07-24 18:06:19.348612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.607 [2024-07-24 18:06:19.348630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc970 with addr=10.0.0.2, port=4420 00:19:12.607 [2024-07-24 18:06:19.348641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.607 [2024-07-24 18:06:19.348664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.607 [2024-07-24 18:06:19.348679] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.607 [2024-07-24 18:06:19.348689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.607 [2024-07-24 18:06:19.348699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.607 [2024-07-24 18:06:19.348712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.607 [2024-07-24 18:06:19.348734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:12.607 [2024-07-24 18:06:19.348783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.607 [2024-07-24 18:06:19.348798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ab380 with addr=10.0.0.3, port=4420 00:19:12.607 [2024-07-24 18:06:19.348808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.608 [2024-07-24 18:06:19.348822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.608 [2024-07-24 18:06:19.348836] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:12.608 [2024-07-24 18:06:19.348845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:12.608 [2024-07-24 18:06:19.348855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:12.608 [2024-07-24 18:06:19.348868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:12.608 [2024-07-24 18:06:19.358579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:12.608 [2024-07-24 18:06:19.358669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.608 [2024-07-24 18:06:19.358687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc970 with addr=10.0.0.2, port=4420 00:19:12.608 [2024-07-24 18:06:19.358698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.608 [2024-07-24 18:06:19.358714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.608 [2024-07-24 18:06:19.358728] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.608 [2024-07-24 18:06:19.358738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.608 [2024-07-24 18:06:19.358748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.608 [2024-07-24 18:06:19.358763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.608 [2024-07-24 18:06:19.358786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:12.608 [2024-07-24 18:06:19.358835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.608 [2024-07-24 18:06:19.358850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ab380 with addr=10.0.0.3, port=4420 00:19:12.608 [2024-07-24 18:06:19.358860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.608 [2024-07-24 18:06:19.358874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.608 [2024-07-24 18:06:19.358888] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:12.608 [2024-07-24 18:06:19.358898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:12.608 [2024-07-24 18:06:19.358908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:12.608 [2024-07-24 18:06:19.358921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:12.608 [2024-07-24 18:06:19.368639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:12.608 [2024-07-24 18:06:19.368712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.608 [2024-07-24 18:06:19.368729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc970 with addr=10.0.0.2, port=4420 00:19:12.608 [2024-07-24 18:06:19.368740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.608 [2024-07-24 18:06:19.368754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.608 [2024-07-24 18:06:19.368768] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.608 [2024-07-24 18:06:19.368777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.608 [2024-07-24 18:06:19.368788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.608 [2024-07-24 18:06:19.368802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.608 [2024-07-24 18:06:19.368825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:12.608 [2024-07-24 18:06:19.368873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.608 [2024-07-24 18:06:19.368888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ab380 with addr=10.0.0.3, port=4420 00:19:12.608 [2024-07-24 18:06:19.368898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.608 [2024-07-24 18:06:19.368912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.608 [2024-07-24 18:06:19.368926] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:12.608 [2024-07-24 18:06:19.368936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:12.608 [2024-07-24 18:06:19.368946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:12.608 [2024-07-24 18:06:19.368958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:12.608 [2024-07-24 18:06:19.378685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:12.608 [2024-07-24 18:06:19.378755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.608 [2024-07-24 18:06:19.378770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc970 with addr=10.0.0.2, port=4420 00:19:12.608 [2024-07-24 18:06:19.378781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.608 [2024-07-24 18:06:19.378796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.608 [2024-07-24 18:06:19.378810] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.608 [2024-07-24 18:06:19.378819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.608 [2024-07-24 18:06:19.378830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.608 [2024-07-24 18:06:19.378842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.608 [2024-07-24 18:06:19.378868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:12.608 [2024-07-24 18:06:19.378915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.608 [2024-07-24 18:06:19.378929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ab380 with addr=10.0.0.3, port=4420 00:19:12.608 [2024-07-24 18:06:19.378939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.608 [2024-07-24 18:06:19.378953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.608 [2024-07-24 18:06:19.378967] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:12.608 [2024-07-24 18:06:19.378977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:12.608 [2024-07-24 18:06:19.378987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:12.608 [2024-07-24 18:06:19.378999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:12.608 [2024-07-24 18:06:19.388732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:12.608 [2024-07-24 18:06:19.388807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.608 [2024-07-24 18:06:19.388824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc970 with addr=10.0.0.2, port=4420 00:19:12.608 [2024-07-24 18:06:19.388835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cc970 is same with the state(5) to be set 00:19:12.608 [2024-07-24 18:06:19.388850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cc970 (9): Bad file descriptor 00:19:12.608 [2024-07-24 18:06:19.388864] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.608 [2024-07-24 18:06:19.388874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.608 [2024-07-24 18:06:19.388884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.608 [2024-07-24 18:06:19.388898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.608 [2024-07-24 18:06:19.388920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:12.608 [2024-07-24 18:06:19.388968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.608 [2024-07-24 18:06:19.388983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ab380 with addr=10.0.0.3, port=4420 00:19:12.608 [2024-07-24 18:06:19.388993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab380 is same with the state(5) to be set 00:19:12.608 [2024-07-24 18:06:19.389007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab380 (9): Bad file descriptor 00:19:12.608 [2024-07-24 18:06:19.389021] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:12.608 [2024-07-24 18:06:19.389031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:12.608 [2024-07-24 18:06:19.389041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:12.608 [2024-07-24 18:06:19.389068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
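[editor's note] The repeated `connect() failed, errno = 111` blocks above are the host's reconnect loop: errno 111 is ECONNREFUSED on Linux, and by this point in the test the 4420 listeners are gone (the discovery updates just below report 4420 "not found" and 4421 "found again"), so each reset attempt against port 4420 fails until the controllers are re-attached on the new port. A minimal sketch of how the same state could be watched from the host RPC socket used throughout this log; the loop bound and sleep are illustrative, not taken from the test scripts.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used elsewhere in this log
  for i in $(seq 1 10); do                             # illustrative bound, not from the test
      # names of the controllers the host currently has attached
      "$rpc_py" -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
      # which trsvcid (port) a given controller is connected through
      "$rpc_py" -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
          | jq -r '.[].ctrlrs[].trid.trsvcid'
      sleep 1
  done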
00:19:12.608 [2024-07-24 18:06:19.398062] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:19:12.608 [2024-07-24 18:06:19.398087] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:12.608 [2024-07-24 18:06:19.398118] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:12.608 [2024-07-24 18:06:19.398150] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:12.608 [2024-07-24 18:06:19.398164] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:12.608 [2024-07-24 18:06:19.398177] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:12.608 [2024-07-24 18:06:19.484186] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:12.608 [2024-07-24 18:06:19.484280] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.545 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.804 18:06:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:19:13.804 [2024-07-24 18:06:20.655606] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.765 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:15.025 18:06:21 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.025 [2024-07-24 18:06:21.865404] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:19:15.025 2024/07/24 18:06:21 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:19:15.025 request: 00:19:15.025 { 00:19:15.025 "method": "bdev_nvme_start_mdns_discovery", 00:19:15.025 "params": { 00:19:15.025 "name": "mdns", 00:19:15.025 "svcname": "_nvme-disc._http", 00:19:15.025 "hostnqn": "nqn.2021-12.io.spdk:test" 00:19:15.025 } 00:19:15.025 } 00:19:15.025 Got JSON-RPC error response 00:19:15.025 GoRPCClient: error on JSON-RPC call 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.025 18:06:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:19:15.593 [2024-07-24 18:06:22.450170] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:15.593 [2024-07-24 18:06:22.550148] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:15.851 [2024-07-24 18:06:22.650161] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:15.851 [2024-07-24 18:06:22.650195] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:15.851 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:15.851 cookie is 0 00:19:15.851 is_local: 1 00:19:15.851 our_own: 0 00:19:15.851 wide_area: 0 00:19:15.851 multicast: 1 00:19:15.851 cached: 1 00:19:15.851 [2024-07-24 18:06:22.750186] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:15.851 [2024-07-24 18:06:22.750235] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:15.851 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:15.851 cookie is 0 00:19:15.851 is_local: 1 00:19:15.851 our_own: 0 00:19:15.851 wide_area: 0 00:19:15.851 multicast: 1 00:19:15.851 cached: 1 00:19:15.851 [2024-07-24 18:06:22.750261] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:16.109 [2024-07-24 18:06:22.850185] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:16.109 [2024-07-24 18:06:22.850228] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:16.109 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:16.109 cookie is 0 00:19:16.109 is_local: 1 00:19:16.109 our_own: 0 00:19:16.109 wide_area: 0 00:19:16.109 multicast: 1 00:19:16.109 cached: 1 00:19:16.109 [2024-07-24 18:06:22.950190] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:16.109 [2024-07-24 18:06:22.950237] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:16.109 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:16.109 cookie is 0 00:19:16.109 is_local: 1 00:19:16.109 our_own: 0 00:19:16.109 wide_area: 0 00:19:16.109 multicast: 1 00:19:16.109 cached: 1 00:19:16.109 [2024-07-24 18:06:22.950263] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:19:16.697 [2024-07-24 18:06:23.658922] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:16.698 [2024-07-24 18:06:23.658967] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:16.698 [2024-07-24 18:06:23.658986] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:16.962 [2024-07-24 18:06:23.745052] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:19:16.962 [2024-07-24 18:06:23.805352] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:16.962 [2024-07-24 18:06:23.805398] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:16.962 [2024-07-24 18:06:23.858809] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:16.962 [2024-07-24 18:06:23.858849] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:16.962 [2024-07-24 18:06:23.858865] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:17.224 [2024-07-24 18:06:23.944937] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:19:17.224 [2024-07-24 18:06:24.005217] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:17.224 [2024-07-24 18:06:24.005282] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:20.515 18:06:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:20.515 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.515 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:20.515 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:20.515 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:20.515 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:20.515 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # 
local arg=rpc_cmd 00:19:20.515 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.515 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:20.515 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.516 [2024-07-24 18:06:27.075509] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:19:20.516 request: 00:19:20.516 { 00:19:20.516 "method": "bdev_nvme_start_mdns_discovery", 00:19:20.516 "params": { 00:19:20.516 "name": "cdc", 00:19:20.516 "svcname": "_nvme-disc._tcp", 00:19:20.516 "hostnqn": "nqn.2021-12.io.spdk:test" 00:19:20.516 } 00:19:20.516 } 00:19:20.516 Got JSON-RPC error response 00:19:20.516 GoRPCClient: error on JSON-RPC call 00:19:20.516 2024/07/24 18:06:27 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # xargs 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 93464 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 93464 00:19:20.516 [2024-07-24 18:06:27.260381] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 93493 00:19:20.516 Got SIGTERM, quitting. 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:19:20.516 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:19:20.516 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:19:20.516 avahi-daemon 0.8 exiting. 
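[editor's note] The two failed RPCs above (`Code=-17 Msg=File exists`) are deliberate negative tests: once an mDNS discovery poller is running, `bdev_nvme_start_mdns_discovery` rejects both a second start under the same name (`mdns`, even with a different svcname) and a different name (`cdc`) for the same `_nvme-disc._tcp` service. A short hedged sketch of that sequence against the same host socket, re-using only RPCs and flags that appear in this log; the `|| true` guards are illustrative so the snippet continues past the expected errors.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/tmp/host.sock
  # first start registers the avahi poller for _nvme-disc._tcp
  "$rpc_py" -s "$sock" bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # same name again -> JSON-RPC error Code=-17 (File exists), as logged above
  "$rpc_py" -s "$sock" bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test || true
  # different name, same service type -> also rejected while the poller is running
  "$rpc_py" -s "$sock" bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test || true
  # stopping the named poller is what produces the "Stopping avahi poller" lines above
  "$rpc_py" -s "$sock" bdev_nvme_stop_mdns_discovery -b mdns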
00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.516 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:20.516 rmmod nvme_tcp 00:19:20.516 rmmod nvme_fabrics 00:19:20.775 rmmod nvme_keyring 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 93409 ']' 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 93409 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # '[' -z 93409 ']' 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # kill -0 93409 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # uname 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93409 00:19:20.775 killing process with pid 93409 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93409' 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@969 -- # kill 93409 00:19:20.775 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@974 -- # wait 93409 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:21.034 00:19:21.034 real 0m20.827s 00:19:21.034 user 0m40.430s 00:19:21.034 sys 0m2.533s 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:21.034 18:06:27 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.034 ************************************ 00:19:21.034 END TEST nvmf_mdns_discovery 00:19:21.034 ************************************ 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.034 ************************************ 00:19:21.034 START TEST nvmf_host_multipath 00:19:21.034 ************************************ 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:21.034 * Looking for test storage... 00:19:21.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.034 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
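[editor's note] Before any RPCs are issued, the preamble above sources nvmf/common.sh, which generates a fresh host identity: `nvme gen-hostnqn` produces the uuid-based NQN and the host ID is its uuid suffix. A tiny standalone sketch of that block, assuming only nvme-cli is installed; the parameter expansion used to strip the prefix is an illustrative choice, not copied from common.sh.

  # hedged reproduction of the host-identity variables seen above
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the trailing uuid for --hostid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  printf '%s\n' "${NVME_HOST[@]}"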
00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:21.035 18:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:21.294 Cannot find device "nvmf_tgt_br" 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.294 Cannot find device "nvmf_tgt_br2" 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:21.294 Cannot find device "nvmf_tgt_br" 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:21.294 Cannot find device "nvmf_tgt_br2" 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:21.294 18:06:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:21.294 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:21.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:19:21.553 00:19:21.553 --- 10.0.0.2 ping statistics --- 00:19:21.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.553 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:21.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:21.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:19:21.553 00:19:21.553 --- 10.0.0.3 ping statistics --- 00:19:21.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.553 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:21.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:21.553 00:19:21.553 --- 10.0.0.1 ping statistics --- 00:19:21.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.553 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94055 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94055 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 94055 ']' 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.553 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
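[editor's note] The long run of `ip`/`iptables` commands above is nvmf_veth_init building the test network: the initiator keeps 10.0.0.1 on nvmf_init_if, the target namespace nvmf_tgt_ns_spdk gets 10.0.0.2 (nvmf_tgt_if) and 10.0.0.3 (nvmf_tgt_if2), and everything is joined through the nvmf_br bridge before the ping checks. A condensed, hedged replay of that sequence, keeping only the commands visible in the log (the preceding teardown steps are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # reachability, as verified above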
00:19:21.554 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.554 18:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:21.554 [2024-07-24 18:06:28.389080] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:19:21.554 [2024-07-24 18:06:28.389171] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.554 [2024-07-24 18:06:28.527364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:21.832 [2024-07-24 18:06:28.632711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.832 [2024-07-24 18:06:28.632760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.832 [2024-07-24 18:06:28.632771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.832 [2024-07-24 18:06:28.632780] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.832 [2024-07-24 18:06:28.632787] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.832 [2024-07-24 18:06:28.632922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.832 [2024-07-24 18:06:28.632923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.767 18:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.767 18:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:19:22.767 18:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.767 18:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:22.767 18:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:22.767 18:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.767 18:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94055 00:19:22.767 18:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:23.025 [2024-07-24 18:06:29.846176] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.025 18:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:23.283 Malloc0 00:19:23.283 18:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:23.540 18:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:24.147 18:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
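From here on the test is driven entirely over JSON-RPC. The target side traced above builds one subsystem with a malloc namespace and, after the NOTICE just below confirms the 10.0.0.2:4420 listener, adds a second listener on 4421; the host side then attaches bdevperf to the same subsystem through both listeners, the second attach with -x multipath so both paths land on a single Nvme0n1 bdev. A condensed sketch of that RPC sequence, assembled from the traced commands (paths kept as traced; not the verbatim multipath.sh source, and -r on nvmf_create_subsystem presumably enables the ANA reporting that the later set_ANA_state steps exercise):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target: transport, backing bdev, subsystem, namespace, two listeners on the same address
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # host: bdevperf (started with -z -r /var/tmp/bdevperf.sock) gets two paths to the same subsystem
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10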
00:19:24.404 [2024-07-24 18:06:31.168457] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.404 18:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:24.662 [2024-07-24 18:06:31.516736] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:24.662 18:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94159 00:19:24.662 18:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:24.662 18:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.662 18:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 94159 /var/tmp/bdevperf.sock 00:19:24.662 18:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 94159 ']' 00:19:24.662 18:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.662 18:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.662 18:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.662 18:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.662 18:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:26.034 18:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.034 18:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:19:26.034 18:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:26.034 18:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:26.290 Nvme0n1 00:19:26.546 18:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:26.803 Nvme0n1 00:19:26.803 18:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:26.803 18:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:27.736 18:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:27.736 18:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:27.994 18:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:28.301 18:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:28.301 18:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94246 00:19:28.301 18:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94055 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:28.301 18:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:34.856 Attaching 4 probes... 00:19:34.856 @path[10.0.0.2, 4421]: 18293 00:19:34.856 @path[10.0.0.2, 4421]: 19331 00:19:34.856 @path[10.0.0.2, 4421]: 18571 00:19:34.856 @path[10.0.0.2, 4421]: 18441 00:19:34.856 @path[10.0.0.2, 4421]: 16811 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94246 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:34.856 18:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:35.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:35.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94055 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:35.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94382 00:19:35.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:41.675 Attaching 4 probes... 00:19:41.675 @path[10.0.0.2, 4420]: 19029 00:19:41.675 @path[10.0.0.2, 4420]: 19109 00:19:41.675 @path[10.0.0.2, 4420]: 19237 00:19:41.675 @path[10.0.0.2, 4420]: 19654 00:19:41.675 @path[10.0.0.2, 4420]: 19187 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94382 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:41.675 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:41.933 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:41.933 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94517 00:19:41.933 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:41.933 18:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94055 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:48.492 18:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:48.492 18:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:48.492 18:06:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:48.492 Attaching 4 probes... 00:19:48.492 @path[10.0.0.2, 4421]: 15429 00:19:48.492 @path[10.0.0.2, 4421]: 18874 00:19:48.492 @path[10.0.0.2, 4421]: 19418 00:19:48.492 @path[10.0.0.2, 4421]: 19201 00:19:48.492 @path[10.0.0.2, 4421]: 19046 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94517 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:48.492 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:48.750 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:48.750 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94643 00:19:48.750 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94055 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:48.750 18:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:55.314 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:55.315 Attaching 4 probes... 
00:19:55.315 00:19:55.315 00:19:55.315 00:19:55.315 00:19:55.315 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94643 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:55.315 18:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:55.315 18:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:55.573 18:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:55.573 18:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94055 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:55.573 18:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94779 00:19:55.573 18:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:02.134 Attaching 4 probes... 
00:20:02.134 @path[10.0.0.2, 4421]: 16423 00:20:02.134 @path[10.0.0.2, 4421]: 17896 00:20:02.134 @path[10.0.0.2, 4421]: 18209 00:20:02.134 @path[10.0.0.2, 4421]: 18202 00:20:02.134 @path[10.0.0.2, 4421]: 18017 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94779 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:02.134 18:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:02.134 [2024-07-24 18:07:09.003643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [2024-07-24 18:07:09.003818] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.134 [... the same tcp.c:1653:nvmf_tcp_qpair_set_recv_state error for tqpair=0x1c68330 repeats for every timestamp from 18:07:09.003827 through 18:07:09.004841 during removal of the 10.0.0.2:4421 listener; the duplicate entries are elided here and the final few repetitions follow ...] 00:20:02.135 [2024-07-24 18:07:09.004851]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.135 [2024-07-24 18:07:09.004860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.135 [2024-07-24 18:07:09.004869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.135 [2024-07-24 18:07:09.004879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.136 [2024-07-24 18:07:09.004888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68330 is same with the state(5) to be set 00:20:02.136 18:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:03.072 18:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:03.072 18:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94909 00:20:03.072 18:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:03.072 18:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94055 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:09.632 Attaching 4 probes... 
00:20:09.632 @path[10.0.0.2, 4420]: 16294 00:20:09.632 @path[10.0.0.2, 4420]: 17241 00:20:09.632 @path[10.0.0.2, 4420]: 16948 00:20:09.632 @path[10.0.0.2, 4420]: 16954 00:20:09.632 @path[10.0.0.2, 4420]: 17701 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94909 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:09.632 [2024-07-24 18:07:16.550976] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:09.632 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:10.198 18:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:16.756 18:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:16.756 18:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95107 00:20:16.756 18:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:16.756 18:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94055 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:22.020 18:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:22.020 18:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:22.278 Attaching 4 probes... 
00:20:22.278 @path[10.0.0.2, 4421]: 17633 00:20:22.278 @path[10.0.0.2, 4421]: 17960 00:20:22.278 @path[10.0.0.2, 4421]: 17896 00:20:22.278 @path[10.0.0.2, 4421]: 17582 00:20:22.278 @path[10.0.0.2, 4421]: 17679 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95107 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94159 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 94159 ']' 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 94159 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:22.278 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94159 00:20:22.544 killing process with pid 94159 00:20:22.544 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:22.544 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:22.544 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94159' 00:20:22.544 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 94159 00:20:22.544 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 94159 00:20:22.544 Connection closed with partial response: 00:20:22.544 00:20:22.544 00:20:22.544 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94159 00:20:22.544 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:22.544 [2024-07-24 18:06:31.587081] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:20:22.544 [2024-07-24 18:06:31.587330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94159 ] 00:20:22.544 [2024-07-24 18:06:31.726682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.544 [2024-07-24 18:06:31.849295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.544 Running I/O for 90 seconds... 
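The confirm_io_on_port checks repeated throughout the run (optimized/4421, non_optimized/4420, and the all-inaccessible case with empty expectations) all follow the same traced pattern: start a bpftrace probe against the target pid, let bdevperf run for six seconds, ask the target which listener reports the expected ANA state, and check that against the port the I/O actually used according to the first @path line of the probe output. Condensed into one place, that logic looks roughly like the sketch below (assembled from the xtrace lines above, with shortened paths; a sketch, not the verbatim host/multipath.sh source):

  # confirm_io_on_port <expected_ana_state> <expected_port>
  confirm_io_on_port() {
      local state=$1 expected=$2
      # nvmf_path.bt prints one "@path[<addr>, <port>]: <I/O count>" line per path bdevperf used;
      # "$nvmfapp_pid" is the nvmf_tgt pid (94055 in this run)
      scripts/bpftrace.sh "$nvmfapp_pid" scripts/bpf/nvmf_path.bt &> trace.txt &
      dtrace_pid=$!
      sleep 6                        # give bdevperf time to settle on a path after the ANA change
      # the listener the target itself reports in the expected ANA state
      active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
          jq -r ".[] | select (.ana_states[0].ana_state==\"$state\") | .address.trsvcid")
      # the port the I/O actually flowed through (first @path line of the probe output)
      port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
      [[ $port == "$expected" ]] && [[ $active_port == "$expected" ]]
      local ok=$?
      kill $dtrace_pid
      rm -f trace.txt
      return $ok
  }

The per-I/O entries that follow in the try.txt dump are the host-side view of the same exercise: each nvme_qpair print_command/print_completion pair records an I/O that completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) status as the ANA state of the path it was using changed, which is what pushes bdevperf's multipath policy back and forth between the 4420 and 4421 listeners.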
00:20:22.544 [2024-07-24 18:06:41.984923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.544 [2024-07-24 18:06:41.984986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.544 [2024-07-24 18:06:41.985040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.985976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.985997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.986012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.986033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.986048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.986069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.986084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.986106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 
[2024-07-24 18:06:41.986122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.986143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.986158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.986180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.986195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.986965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.986993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.987020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.987036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.987057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.987073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.987106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.987121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:22.545 [2024-07-24 18:06:41.987143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.545 [2024-07-24 18:06:41.987157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92848 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.546 [2024-07-24 18:06:41.987621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987643] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.546 [2024-07-24 18:06:41.987658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.546 [2024-07-24 18:06:41.987695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.546 [2024-07-24 18:06:41.987731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.546 [2024-07-24 18:06:41.987768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.546 [2024-07-24 18:06:41.987804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.546 [2024-07-24 18:06:41.987840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.546 [2024-07-24 18:06:41.987877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.546 [2024-07-24 18:06:41.987913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.987970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.987985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 
18:06:41.988007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:22.546 [2024-07-24 18:06:41.988532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.546 [2024-07-24 18:06:41.988547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.988968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.988983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.989004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.989019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.989040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.989054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.989075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.989091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.989112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.989127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.989754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.989777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.989800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.989815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.989835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.989850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.989869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.989884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.989903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.989917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.989937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.989951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.989971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.989992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 
[2024-07-24 18:06:41.990060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93328 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:22.547 [2024-07-24 18:06:41.990511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.547 [2024-07-24 18:06:41.990526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990750] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.990974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.990994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.991008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.991028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.991042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:41.991062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:41.991076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.473422] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.473499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.473561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.473581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.473607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.473625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.473650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.473667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.473692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.473709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.473760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.473777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.473802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.473820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.473844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.473862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.473887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.473904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.473929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.473947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 
m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.473971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.473989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.474013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.474030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.474055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.474072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.474097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.474114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.474138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.474155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.474180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.474198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.474222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.474252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.474373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.474407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:22.548 [2024-07-24 18:06:48.474436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.548 [2024-07-24 18:06:48.474455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.474481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.474500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.474526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.474544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.474571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.474588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.474614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.474632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.474657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.474676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.474703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.474722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.474748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.474774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.474812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.474838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.474874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.474899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.474933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.474958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.474994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 
[2024-07-24 18:06:48.475501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:22.549 [2024-07-24 18:06:48.475897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.549 [2024-07-24 18:06:48.475915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.475942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67136 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.475960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.475986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476406] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.476786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.476831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 
18:06:48.476858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.476876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.476920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.476965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.476991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.477009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.477036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.477054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.477080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.477098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.477131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.477149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.477175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.477193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.477219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.477236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.477273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.477298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.477325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.477354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.477379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.477396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.477420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.477437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.477462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-07-24 18:06:48.477478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.477985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.478006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:22.550 [2024-07-24 18:06:48.478050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.550 [2024-07-24 18:06:48.478068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478548] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.478967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.478999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.479017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 
18:06:48.479068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.479118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.479169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.479219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:48.479283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:48.479341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:48.479396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:48.479447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:48.479497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:48.479557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66720 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:48.479608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:48.479659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:48.479692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:48.479710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:55.575129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.551 [2024-07-24 18:06:55.575207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:55.575283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:55.575306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:55.575333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:55.575351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:55.575377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:55.575395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:55.575420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:55.575464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:55.575489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:55.575507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:55.575547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:55.575582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:55.575619] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:55.575637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:55.576071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:55.576098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:55.576127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:55.576145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:22.551 [2024-07-24 18:06:55.576171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-07-24 18:06:55.576189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576479] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 
dnr:0 00:20:22.552 [2024-07-24 18:06:55.576850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.576968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.576983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.577020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-07-24 18:06:55.577057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577611] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.552 [2024-07-24 18:06:55.577666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.552 [2024-07-24 18:06:55.577681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.577701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.577715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.577736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.577750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.577771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.577785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.577806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.577820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.577841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.577860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.577881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.577895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.577916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.577930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.577951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 
18:06:55.577965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.577985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.578433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.578481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.578522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.578564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.578605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.578647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.553 [2024-07-24 18:06:55.578688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 
nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.578967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.578983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.579151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.579173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.579202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.579218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.579245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.579273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.579300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.579316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:22.553 [2024-07-24 18:06:55.579344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.553 [2024-07-24 18:06:55.579359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:22.554 
[2024-07-24 18:06:55.579951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.579968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.579996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 
sqhd:003c p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.554 [2024-07-24 18:06:55.580748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:22.554 [2024-07-24 18:06:55.580776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.580791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.580824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.580839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.580867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.580883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.580910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.580926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.580952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.580968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.580996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.581011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.581038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.581053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.581080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.581095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.581122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.581138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.581165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.581181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.581207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.581223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.581257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.581276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.581303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.581319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:06:55.581355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.555 [2024-07-24 18:06:55.581371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.005979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.005995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.006011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.006026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.006042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.006057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.006074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.006088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.006104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.006119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.006136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.006150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.006178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-07-24 18:07:09.006192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-07-24 18:07:09.006208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:22.556 [2024-07-24 18:07:09.006720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.006975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.006990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.007021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007038] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.007053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.007089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.007122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.007154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.007185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.007216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-07-24 18:07:09.007247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.556 [2024-07-24 18:07:09.007288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.556 [2024-07-24 18:07:09.007320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.556 [2024-07-24 18:07:09.007351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.556 [2024-07-24 18:07:09.007384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.556 [2024-07-24 18:07:09.007416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-07-24 18:07:09.007432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:85 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.007810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-07-24 18:07:09.007841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-07-24 18:07:09.007873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-07-24 18:07:09.007906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-07-24 18:07:09.007943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-07-24 18:07:09.007974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.007991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-07-24 18:07:09.008006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66664 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-07-24 18:07:09.008037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-07-24 18:07:09.008068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-07-24 18:07:09.008100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-07-24 18:07:09.008131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-07-24 18:07:09.008162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 
[2024-07-24 18:07:09.008372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.557 [2024-07-24 18:07:09.008687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-07-24 18:07:09.008704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.008718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.008740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.008754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.008771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.008786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.008802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.008819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.008835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.008850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.008867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.008882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.008898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.008913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.008931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.008946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.008962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.008977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.008993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.558 [2024-07-24 18:07:09.009594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:22.558 [2024-07-24 18:07:09.009643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:22.558 [2024-07-24 18:07:09.009655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67192 len:8 PRP1 0x0 PRP2 0x0 00:20:22.558 [2024-07-24 18:07:09.009669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-07-24 18:07:09.009741] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13ea250 was disconnected and freed. 
reset controller.
00:20:22.558 [2024-07-24 18:07:09.011030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.558 [2024-07-24 18:07:09.011108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13669c0 (9): Bad file descriptor
00:20:22.558 [2024-07-24 18:07:09.011213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.558 [2024-07-24 18:07:09.011234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13669c0 with addr=10.0.0.2, port=4421
00:20:22.558 [2024-07-24 18:07:09.011271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13669c0 is same with the state(5) to be set
00:20:22.558 [2024-07-24 18:07:09.011294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13669c0 (9): Bad file descriptor
00:20:22.558 [2024-07-24 18:07:09.011315] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.558 [2024-07-24 18:07:09.011330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.558 [2024-07-24 18:07:09.011347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.558 [2024-07-24 18:07:09.011370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.558 [2024-07-24 18:07:09.011387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.558 [2024-07-24 18:07:19.093455] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:22.558 Received shutdown signal, test time was about 55.529200 seconds 00:20:22.558 00:20:22.558 Latency(us) 00:20:22.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.558 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:22.558 Verification LBA range: start 0x0 length 0x4000 00:20:22.558 Nvme0n1 : 55.53 7783.23 30.40 0.00 0.00 16420.04 503.22 7030452.42 00:20:22.558 =================================================================================================================== 00:20:22.558 Total : 7783.23 30.40 0.00 0.00 16420.04 503.22 7030452.42 00:20:22.558 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.817 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:20:22.817 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:22.817 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:20:22.817 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:22.817 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:20:22.817 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.817 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:20:22.817 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.817 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.817 rmmod nvme_tcp 00:20:22.817 rmmod nvme_fabrics 00:20:23.076 rmmod nvme_keyring 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94055 ']' 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94055 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 94055 ']' 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 94055 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94055 00:20:23.076 killing process with pid 94055 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94055' 00:20:23.076 18:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 94055 00:20:23.076 18:07:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 94055 00:20:23.335 18:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:23.335 18:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:23.335 18:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:23.335 18:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.335 18:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:23.335 18:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.335 18:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:23.336 ************************************ 00:20:23.336 END TEST nvmf_host_multipath 00:20:23.336 ************************************ 00:20:23.336 00:20:23.336 real 1m2.247s 00:20:23.336 user 2m53.013s 00:20:23.336 sys 0m17.521s 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.336 ************************************ 00:20:23.336 START TEST nvmf_timeout 00:20:23.336 ************************************ 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:23.336 * Looking for test storage... 
00:20:23.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
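The preamble traced above reduces to a handful of shell variables that every later RPC call in this run reuses. A minimal sketch of the equivalent setup, with values taken from this run (the generated host NQN and ID differ on every machine, and the exact derivation inside nvmf/common.sh may differ from this sketch):

  # timeout.sh / nvmf/common.sh preamble, condensed (sketch, not the exact scripts)
  NVMF_PORT=4420; NVMF_SECOND_PORT=4421; NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # uuid portion of the NQN
  MALLOC_BDEV_SIZE=64                       # MiB backing the Malloc0 bdev exported over NVMe/TCP
  MALLOC_BLOCK_SIZE=512
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock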
00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:23.336 Cannot find device "nvmf_tgt_br" 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:20:23.336 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.595 Cannot find device "nvmf_tgt_br2" 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
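nvmftestinit with NET_TYPE=virt builds a small veth topology, and the ip(8) commands traced over the next few lines carry that out. Condensed into one place, using the same interface names and addresses as this run, the setup amounts to roughly the sketch below (removal of leftover devices and the error paths are omitted):

  # essence of nvmf_veth_init as traced below (sketch)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root netns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target ends move into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge ties the root-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 that follow simply verify this topology before the target is started.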
00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:23.595 Cannot find device "nvmf_tgt_br" 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:23.595 Cannot find device "nvmf_tgt_br2" 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:23.595 18:07:30 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.595 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:23.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:20:23.853 00:20:23.853 --- 10.0.0.2 ping statistics --- 00:20:23.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.853 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:23.853 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:23.853 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:20:23.853 00:20:23.853 --- 10.0.0.3 ping statistics --- 00:20:23.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.853 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:23.853 00:20:23.853 --- 10.0.0.1 ping statistics --- 00:20:23.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.853 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=95427 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 95427 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95427 ']' 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.853 18:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:23.853 [2024-07-24 18:07:30.690436] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:20:23.854 [2024-07-24 18:07:30.690545] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.854 [2024-07-24 18:07:30.827619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:24.111 [2024-07-24 18:07:30.944638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:24.111 [2024-07-24 18:07:30.944716] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.111 [2024-07-24 18:07:30.944735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.111 [2024-07-24 18:07:30.944749] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.111 [2024-07-24 18:07:30.944763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.111 [2024-07-24 18:07:30.945580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.111 [2024-07-24 18:07:30.945595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.678 18:07:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.678 18:07:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:24.678 18:07:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.678 18:07:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:24.678 18:07:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:24.936 18:07:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.936 18:07:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:24.936 18:07:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:24.936 [2024-07-24 18:07:31.887295] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.194 18:07:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:25.453 Malloc0 00:20:25.453 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:25.711 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:25.711 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.968 [2024-07-24 18:07:32.890557] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.968 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=95518 00:20:25.968 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:25.968 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 95518 /var/tmp/bdevperf.sock 00:20:25.968 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95518 ']' 00:20:25.968 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.968 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock... 00:20:25.968 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:25.968 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.968 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:25.968 18:07:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:26.225 [2024-07-24 18:07:32.966219] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:20:26.225 [2024-07-24 18:07:32.966339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95518 ] 00:20:26.225 [2024-07-24 18:07:33.107757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.482 [2024-07-24 18:07:33.231347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.048 18:07:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.048 18:07:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:27.049 18:07:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:27.306 18:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:27.565 NVMe0n1 00:20:27.565 18:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=95561 00:20:27.565 18:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:27.565 18:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:27.565 Running I/O for 10 seconds... 
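Stripped of the xtrace prefixes, the bring-up that just completed is a short RPC sequence: provision a TCP subsystem backed by Malloc0 on the target, then attach it from bdevperf with a 5 second controller-loss timeout and a 2 second reconnect delay. A condensed sketch using the same sockets, NQN and options as this run (the real scripts background these processes and wait on their RPC sockets before issuing the next command):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target side: nvmf_tgt runs inside the namespace created above, core mask 0x3
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # (waitforlisten blocks on /var/tmp/spdk.sock before the RPCs below are issued)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf on core 0x4, driven over its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The nvmf_subsystem_remove_listener call traced next is what yanks the 10.0.0.2:4420 listener out from under this verify workload, which is why the outstanding READs below complete as ABORTED - SQ DELETION.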
00:20:28.540 18:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:28.811 [2024-07-24 18:07:35.704941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbf730 is same with the state(5) to be set [2024-07-24 18:07:35.705823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbf730 is same with the state(5) to be set
00:20:28.813 [2024-07-24 18:07:35.713786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbf730 is same with the state(5) to be set
00:20:28.813 [2024-07-24 18:07:35.714405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-07-24 18:07:35.714449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-07-24 18:07:35.714472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.813
[2024-07-24 18:07:35.714484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.813 [2024-07-24 18:07:35.714497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.813 [2024-07-24 18:07:35.714509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.813 [2024-07-24 18:07:35.714527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.813 [2024-07-24 18:07:35.714543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.813 [2024-07-24 18:07:35.714836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.813 [2024-07-24 18:07:35.714852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.813 [2024-07-24 18:07:35.714865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.813 [2024-07-24 18:07:35.714876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.813 [2024-07-24 18:07:35.714889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.813 [2024-07-24 18:07:35.714899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.813 [2024-07-24 18:07:35.714911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.813 [2024-07-24 18:07:35.714922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.813 [2024-07-24 18:07:35.715223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.813 [2024-07-24 18:07:35.715256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.813 [2024-07-24 18:07:35.715277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.813 [2024-07-24 18:07:35.715293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.813 [2024-07-24 18:07:35.715313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.813 [2024-07-24 18:07:35.715330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.813 [2024-07-24 18:07:35.715704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.813 [2024-07-24 18:07:35.715720] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:28.813 [2024-07-24 18:07:35.715732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:28.813 [2024-07-24 18:07:35.715743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats between 18:07:35.715757 and 18:07:35.731461 for every command still queued on qid:1 (READ lba 89376-90056 and WRITE lba 90072-90288, len:8 each), all completed as ABORTED - SQ DELETION (00/08) ...]
00:20:28.816 [2024-07-24 18:07:35.731715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa968d0 is same with the state(5) to be set
00:20:28.816 [2024-07-24 18:07:35.731842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:28.816 [2024-07-24 18:07:35.731868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:28.816 [2024-07-24 18:07:35.732017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90064 len:8 PRP1 0x0 PRP2 0x0
00:20:28.816 [2024-07-24 18:07:35.732133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:28.816 [2024-07-24 18:07:35.732437] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa968d0 was disconnected and freed. reset controller.
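The wall of ABORTED - SQ DELETION (00/08) completions above is the initiator-side drain: once the listener is removed on the target and the TCP connection goes away, bdev_nvme tears down the I/O queue pair (0xa968d0), every command still queued on qid:1 is completed with that status, and only then is a controller reset attempted. A minimal sketch of how to provoke and watch the same drain by hand, reusing the NQN, address and RPC sockets from this run (it assumes the target app sits on the default /var/tmp/spdk.sock RPC socket, as the test's own un-prefixed rpc.py calls suggest):

    # target side: pull the TCP listener out from under the initiator
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf side: the controller stays registered while bdev_nvme retries,
    # so this keeps printing NVMe0 until the ctrlr-loss timeout expires
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'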
00:20:28.816 [2024-07-24 18:07:35.732781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.816 [2024-07-24 18:07:35.732814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.816 [2024-07-24 18:07:35.732830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.816 [2024-07-24 18:07:35.732844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.816 [2024-07-24 18:07:35.732858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.816 [2024-07-24 18:07:35.732872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.816 [2024-07-24 18:07:35.732886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.816 [2024-07-24 18:07:35.732992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.816 [2024-07-24 18:07:35.733008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa29240 is same with the state(5) to be set 00:20:28.816 [2024-07-24 18:07:35.733441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:28.816 [2024-07-24 18:07:35.733487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa29240 (9): Bad file descriptor 00:20:28.816 [2024-07-24 18:07:35.733774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.816 [2024-07-24 18:07:35.733802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa29240 with addr=10.0.0.2, port=4420 00:20:28.816 [2024-07-24 18:07:35.733817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa29240 is same with the state(5) to be set 00:20:28.816 [2024-07-24 18:07:35.733839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa29240 (9): Bad file descriptor 00:20:28.817 [2024-07-24 18:07:35.733857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:28.817 [2024-07-24 18:07:35.733870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:28.817 [2024-07-24 18:07:35.733885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:28.817 [2024-07-24 18:07:35.733925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
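Above, the admin queue pair goes through the same abort path (its outstanding ASYNC EVENT REQUESTs are completed as ABORTED - SQ DELETION), then nvme_ctrlr_disconnect starts the reconnect loop: connect() to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED, since the listener is gone), spdk_nvme_ctrlr_reconnect_poll_async reports the failure, and the controller drops back into the failed state until the next retry. The retry cadence and the point at which bdev_nvme gives up are attach-time properties of the controller; for reference, this is the attach call the test issues for the next bdevperf instance further down in this log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

With these knobs the controller retries roughly every --reconnect-delay-sec seconds, starts failing I/O immediately once --fast-io-fail-timeout-sec has elapsed, and is deleted together with its bdev after --ctrlr-loss-timeout-sec passes without a successful reconnect, which is why the test's get_controller / get_bdev checks below eventually come back empty.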
00:20:28.817 [2024-07-24 18:07:35.733943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:28.817 18:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:20:30.764 [2024-07-24 18:07:37.734100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.764 [2024-07-24 18:07:37.734159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa29240 with addr=10.0.0.2, port=4420 00:20:30.764 [2024-07-24 18:07:37.734175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa29240 is same with the state(5) to be set 00:20:30.764 [2024-07-24 18:07:37.734202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa29240 (9): Bad file descriptor 00:20:30.764 [2024-07-24 18:07:37.734220] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:30.764 [2024-07-24 18:07:37.734231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:30.764 [2024-07-24 18:07:37.734253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:30.764 [2024-07-24 18:07:37.734280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.764 [2024-07-24 18:07:37.734291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:31.022 18:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:20:31.022 18:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:31.022 18:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:31.280 18:07:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:20:31.280 18:07:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:20:31.280 18:07:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:31.280 18:07:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:31.538 18:07:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:20:31.538 18:07:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:20:32.910 [2024-07-24 18:07:39.734513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:32.910 [2024-07-24 18:07:39.734581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa29240 with addr=10.0.0.2, port=4420 00:20:32.910 [2024-07-24 18:07:39.734601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa29240 is same with the state(5) to be set 00:20:32.910 [2024-07-24 18:07:39.734633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa29240 (9): Bad file descriptor 00:20:32.910 [2024-07-24 18:07:39.734669] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:32.910 [2024-07-24 18:07:39.734684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:32.910 [2024-07-24 
18:07:39.734700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:32.910 [2024-07-24 18:07:39.734733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:32.910 [2024-07-24 18:07:39.734748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:34.890 [2024-07-24 18:07:41.734886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:34.890 [2024-07-24 18:07:41.734950] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:34.890 [2024-07-24 18:07:41.734963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:34.890 [2024-07-24 18:07:41.734975] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:34.890 [2024-07-24 18:07:41.735000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.887 00:20:35.887 Latency(us) 00:20:35.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.887 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.887 Verification LBA range: start 0x0 length 0x4000 00:20:35.887 NVMe0n1 : 8.20 1360.23 5.31 15.60 0.00 93029.35 1989.49 7062409.02 00:20:35.887 =================================================================================================================== 00:20:35.887 Total : 1360.23 5.31 15.60 0.00 93029.35 1989.49 7062409.02 00:20:35.887 0 00:20:36.450 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:20:36.450 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:36.450 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:36.707 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:20:36.707 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:20:36.707 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:36.707 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 95561 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 95518 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95518 ']' 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95518 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95518 00:20:36.964 killing process with pid 95518 00:20:36.964 Received shutdown signal, test time was about 9.347853 seconds 00:20:36.964 00:20:36.964 Latency(us) 00:20:36.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:36.964 =================================================================================================================== 00:20:36.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95518' 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95518 00:20:36.964 18:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95518 00:20:37.222 18:07:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.481 [2024-07-24 18:07:44.385903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.481 18:07:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=95723 00:20:37.481 18:07:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 95723 /var/tmp/bdevperf.sock 00:20:37.481 18:07:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95723 ']' 00:20:37.481 18:07:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.481 18:07:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.481 18:07:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:37.481 18:07:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.481 18:07:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.481 18:07:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:37.739 [2024-07-24 18:07:44.465280] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
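The bdevperf instance starting up here is driven entirely over its private RPC socket: the test has re-added the TCP listener, killed the previous bdevperf (pid 95518), and now launches a fresh one with -z so it sits idle until a perform_tests RPC arrives. A sketch of that pattern, assembled from the commands visible in this run (same paths, socket and NQN; backgrounding and the wait-for-listen steps are simplified):

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock
    # -z keeps bdevperf idle until perform_tests is sent over its RPC socket;
    # core mask 0x4, queue depth 128, 4096-byte I/O, 'verify' workload, 10 s run
    $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 -f &
    $spdk/scripts/rpc.py -s $sock bdev_nvme_set_options -r -1
    # ... attach NVMe0 with bdev_nvme_attach_controller as shown earlier ...
    $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &

Once perform_tests is accepted, bdevperf prints 'Running I/O for 10 seconds...' and the test immediately removes the listener again to exercise the next timeout path.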
00:20:37.739 [2024-07-24 18:07:44.465372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95723 ] 00:20:37.739 [2024-07-24 18:07:44.626037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.997 [2024-07-24 18:07:44.755270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.563 18:07:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.563 18:07:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:38.563 18:07:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:38.821 18:07:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:39.078 NVMe0n1 00:20:39.078 18:07:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=95772 00:20:39.078 18:07:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.078 18:07:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:39.334 Running I/O for 10 seconds... 00:20:40.296 18:07:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.296 [2024-07-24 18:07:47.250107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.250432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.250601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.250782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.250936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.251081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.251227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.251366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.251478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.251581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.251765] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.251885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.251977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.251991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.252003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.252015] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17e10 is same with the state(5) to be set 00:20:40.296 [2024-07-24 18:07:47.252900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.296 [2024-07-24 18:07:47.252938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.296 [2024-07-24 18:07:47.252959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.296 [2024-07-24 18:07:47.252971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.296 [2024-07-24 18:07:47.252989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.296 [2024-07-24 18:07:47.253007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.253051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.253095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.253138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.253181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.253225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.253608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 
18:07:47.253648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.253691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.253733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.253776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.253812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.253838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.253975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.253988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.254006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.254031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.254063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.254087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.254103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.254117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.254132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.254146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.254165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.254180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.254196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.297 [2024-07-24 18:07:47.254210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.254225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.254239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.297 [2024-07-24 18:07:47.254264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.297 [2024-07-24 18:07:47.254275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 
18:07:47.254875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.254951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.254963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.255127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.255140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.255151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.255163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.255174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.255186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.255196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.255209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.255219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.255231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.255251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.298 [2024-07-24 18:07:47.255265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.298 [2024-07-24 18:07:47.255275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.255287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.255297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.255309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.255319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.255331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.255341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.255353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.255363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.255375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.255840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.255859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.255869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.255881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.255892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.255904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.255914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.255926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.255936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.255948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:12 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.255958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.256336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.256349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.256361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.256373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.256385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.299 [2024-07-24 18:07:47.256395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.256553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.256573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86128 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.256716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.256846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.299 [2024-07-24 18:07:47.256857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.256867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86136 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.257091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.257103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.299 [2024-07-24 18:07:47.257112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.257121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86144 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.257131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.257142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.299 [2024-07-24 18:07:47.257272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.257285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86152 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.257413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.257431] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.299 [2024-07-24 18:07:47.257439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.257555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86160 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.257565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.257576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.299 [2024-07-24 18:07:47.257585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.257810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86168 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.257825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.257837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.299 [2024-07-24 18:07:47.257845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.257854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86176 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.257864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.257875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.299 [2024-07-24 18:07:47.257883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.258021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86184 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.258151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.258168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.299 [2024-07-24 18:07:47.258177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.258308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86192 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.258324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.258466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.299 [2024-07-24 18:07:47.258609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.258690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86200 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.258702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.258713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:20:40.299 [2024-07-24 18:07:47.258722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.258731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86208 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.258742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.299 [2024-07-24 18:07:47.258752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.299 [2024-07-24 18:07:47.258886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.299 [2024-07-24 18:07:47.259013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86216 len:8 PRP1 0x0 PRP2 0x0 00:20:40.299 [2024-07-24 18:07:47.259029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.259040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.259175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.259308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86224 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.259320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.259331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.259622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.259741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86232 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.259753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.259765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.259774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.259925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86240 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.260046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.260065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.260074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.260363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86248 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.260446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.260458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 
18:07:47.260466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.260475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86256 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.260485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.260496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.260504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.260513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86264 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.260640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.260656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.260665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.260779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86272 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.260794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.260805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.260933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.260945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86280 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.260956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.261191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.261201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.261210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86288 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.261220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.261231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.261378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.261394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86296 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.261628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.261640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.261649] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.261658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86304 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.261669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.261679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.261687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.261696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86312 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.261706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.261918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.261933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.261942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86320 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.261953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.261964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.261972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.261981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86328 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.261991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.262225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.262238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.262257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86336 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.262268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.262279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.262290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.262299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86344 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.262309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.262320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.262418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.262427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86352 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.262438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.262449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.262457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.262561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86360 len:8 PRP1 0x0 PRP2 0x0 00:20:40.300 [2024-07-24 18:07:47.262575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.300 [2024-07-24 18:07:47.262588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.300 [2024-07-24 18:07:47.262596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.300 [2024-07-24 18:07:47.262710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86368 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.262726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.262738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.262871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.262886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86376 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.263011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.263027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.263036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.263113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86384 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.263128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.263139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.263147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.263156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86392 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.263166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.263177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.263414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 
18:07:47.263430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86400 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.263441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.263452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.263462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.263471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86408 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.263481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.263623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.263765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.263779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86416 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.263893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.263911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.264152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.264168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86424 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.264180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.264191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.264199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.264208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86432 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.264218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.264328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.264343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.264352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85576 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.264363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.264491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.264504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.264513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85584 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.264646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.264658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.264780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.264793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85592 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.264804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.264938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.264948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.264957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85600 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.265186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.265203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.265212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.265221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85608 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.265232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.265260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.265374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.265389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85616 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.265400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.265411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.301 [2024-07-24 18:07:47.265522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.301 [2024-07-24 18:07:47.265536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85624 len:8 PRP1 0x0 PRP2 0x0 00:20:40.301 [2024-07-24 18:07:47.265546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.265693] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x194bb20 was disconnected and freed. reset controller. 
00:20:40.301 [2024-07-24 18:07:47.265986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.301 [2024-07-24 18:07:47.266011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.266024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.301 [2024-07-24 18:07:47.266035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.266046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.301 [2024-07-24 18:07:47.266057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.266067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.301 [2024-07-24 18:07:47.266077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.301 [2024-07-24 18:07:47.266316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de240 is same with the state(5) to be set 00:20:40.301 [2024-07-24 18:07:47.266728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.301 [2024-07-24 18:07:47.266759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de240 (9): Bad file descriptor 00:20:40.301 [2024-07-24 18:07:47.267059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.302 [2024-07-24 18:07:47.267086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de240 with addr=10.0.0.2, port=4420 00:20:40.302 [2024-07-24 18:07:47.267098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de240 is same with the state(5) to be set 00:20:40.302 [2024-07-24 18:07:47.267117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de240 (9): Bad file descriptor 00:20:40.302 [2024-07-24 18:07:47.267134] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.302 [2024-07-24 18:07:47.267340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.302 [2024-07-24 18:07:47.267360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.302 [2024-07-24 18:07:47.267381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.302 [2024-07-24 18:07:47.267391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.561 18:07:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:20:41.495 [2024-07-24 18:07:48.267555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.495 [2024-07-24 18:07:48.267629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de240 with addr=10.0.0.2, port=4420
00:20:41.495 [2024-07-24 18:07:48.267646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de240 is same with the state(5) to be set
00:20:41.495 [2024-07-24 18:07:48.267672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de240 (9): Bad file descriptor
00:20:41.495 [2024-07-24 18:07:48.267691] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:41.495 [2024-07-24 18:07:48.267702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:41.495 [2024-07-24 18:07:48.267715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:41.495 [2024-07-24 18:07:48.267741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:41.495 [2024-07-24 18:07:48.267752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:41.495 18:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:41.754 [2024-07-24 18:07:48.600380] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:41.754 18:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 95772
00:20:42.320 [2024-07-24 18:07:49.284081] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:50.444
00:20:50.444 Latency(us)
00:20:50.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:50.444 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:50.444 Verification LBA range: start 0x0 length 0x4000
00:20:50.444 NVMe0n1 : 10.01 6606.01 25.80 0.00 0.00 19347.01 1934.87 3035877.18
00:20:50.444 ===================================================================================================================
00:20:50.444 Total : 6606.01 25.80 0.00 0.00 19347.01 1934.87 3035877.18
00:20:50.444 0
00:20:50.445 18:07:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=95889
00:20:50.445 18:07:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:50.445 18:07:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:20:50.445 Running I/O for 10 seconds...
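The host/timeout.sh steps interleaved above are the substantive part of this stretch of the log: the target's TCP listener is re-added while the initiator is still cycling through failed reconnect attempts (connect() errno 111), the earlier backgrounded RPC call is reaped (wait 95772), the controller reset finally succeeds, and a fresh perform_tests run is issued over the bdevperf RPC socket before the listener is dropped again at timeout.sh@99 below. A condensed sketch of that sequence, using only the commands visible in the trace; the backgrounding of the perform_tests call and the rpc_pid capture are inferred from the rpc_pid=95889 assignment and should be read as an assumption, not the script's exact wording:

  # Re-create the TCP listener so the host-side reconnect loop can complete (timeout.sh@91).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Start the verify workload over the bdevperf RPC socket; assumed to be backgrounded so its
  # PID can be captured for a later wait (timeout.sh@96 and @97).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  rpc_pid=$!
  sleep 1

  # Drop the listener again so outstanding I/O starts exercising the timeout path (timeout.sh@99).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

As a quick consistency check on the summary table, the reported throughput follows from the other columns: 6606.01 IOPS of 4096-byte I/Os is about 27.06 MB/s, i.e. the 25.80 MiB/s shown.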
00:20:50.445 18:07:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:50.445 [2024-07-24 18:07:57.396157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16310 is same with the state(5) to be set 00:20:50.445 [... identical nvmf_tcp_qpair_set_recv_state errors for tqpair=0xd16310, timestamps 18:07:57.396219 through 18:07:57.396835, omitted ...]
00:20:50.445 [2024-07-24 18:07:57.396845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16310 is same with the state(5) to be set 00:20:50.445 [2024-07-24 18:07:57.396854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16310 is same with the state(5) to be set 00:20:50.445 [2024-07-24 18:07:57.396865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16310 is same with the state(5) to be set 00:20:50.445 [2024-07-24 18:07:57.396881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16310 is same with the state(5) to be set 00:20:50.445 [2024-07-24 18:07:57.396894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16310 is same with the state(5) to be set 00:20:50.445 [2024-07-24 18:07:57.396904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16310 is same with the state(5) to be set 00:20:50.445 [2024-07-24 18:07:57.398815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.445 [2024-07-24 18:07:57.398857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.445 [2024-07-24 18:07:57.398879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.445 [2024-07-24 18:07:57.398890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.445 [2024-07-24 18:07:57.398905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.398915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.398935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.398949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.399221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.399236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.399266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.399280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.399296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.399309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.399324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 
nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.399338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.399473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.399739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.399762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.399773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.399785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.399797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.399809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.399819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.399832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.399842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.399855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.399969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.400082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.400101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.400113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.400229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.400378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.400518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.400538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:50.446 [2024-07-24 18:07:57.400665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.400679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.400811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.400825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.400947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.400961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.401102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.401199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.401210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.401223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.401233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.401259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.401270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.401283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.401293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.401436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.446 [2024-07-24 18:07:57.401448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.401581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.401711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.401730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 
18:07:57.401741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.401826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.401838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.401851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.401861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.401873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.401883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.401991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.402002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.402015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.402025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.402159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.402171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.402307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.402403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.402417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.402428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.402440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.402550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.402563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.402573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.446 [2024-07-24 18:07:57.402586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.446 [2024-07-24 18:07:57.402706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.402720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.402817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.402831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.402841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.402854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.402864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.403175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.403201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.403217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.403230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.403258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.403273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.403288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.403302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.403317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.403444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.403467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.403482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.403758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.403776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.403795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.403809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.403825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.404166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.404189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.404203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.404219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.404237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.404268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.404282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.404667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.404682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.404700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.404718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.404734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.404747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.405004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.447 [2024-07-24 18:07:57.405027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.405047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.405063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.405082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.405098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.405384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.405405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.405421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.405435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.405450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.405464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.405479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.405492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.405617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.405872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.405899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.405913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.405930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.405943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.405959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.405972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 
18:07:57.406254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.406276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.406292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.406306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.406321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.406335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.406350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.406364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.406644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.406659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.406675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.406690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.406706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.406720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.407018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.407116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.407132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.447 [2024-07-24 18:07:57.407146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.447 [2024-07-24 18:07:57.407162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.407176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.407191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.407204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.407220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.407375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.407392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.407406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.407728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.407742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.407754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.407765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.407777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.407788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.407800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.407810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.407822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.408186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.408210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.408233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.408275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.408298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.408564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.408588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.408610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.408633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.408656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.408788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.408808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.409096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.409119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.409142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.409165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.409187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.409346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.409607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.409630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.448 [2024-07-24 18:07:57.409653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.448 [2024-07-24 18:07:57.409707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79544 len:8 PRP1 0x0 PRP2 0x0 00:20:50.448 [2024-07-24 18:07:57.409942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.448 [2024-07-24 18:07:57.409968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.448 [2024-07-24 18:07:57.409978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79552 len:8 PRP1 0x0 PRP2 0x0 00:20:50.448 [2024-07-24 18:07:57.409988] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.409999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.448 [2024-07-24 18:07:57.410008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.448 [2024-07-24 18:07:57.410016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79560 len:8 PRP1 0x0 PRP2 0x0 00:20:50.448 [2024-07-24 18:07:57.410026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.410036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.448 [2024-07-24 18:07:57.410323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.448 [2024-07-24 18:07:57.410333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79568 len:8 PRP1 0x0 PRP2 0x0 00:20:50.448 [2024-07-24 18:07:57.410343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.410354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.448 [2024-07-24 18:07:57.410363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.448 [2024-07-24 18:07:57.410371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79576 len:8 PRP1 0x0 PRP2 0x0 00:20:50.448 [2024-07-24 18:07:57.410382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.448 [2024-07-24 18:07:57.410392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.448 [2024-07-24 18:07:57.410400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.410409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79584 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.410419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.411008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.411060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.411084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79592 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.411109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.411132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.411152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.411507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79600 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.411573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.411598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.411615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.411636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79608 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.411659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.412119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.412150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.412171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79616 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.412194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.412216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.412234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.412589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79624 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.412618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.412640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.412659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.412680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79632 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.413159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.413186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.413203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.413224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79640 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.413692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.413754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.413773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.413793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79648 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.413817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.414160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.414183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.414202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79656 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.414224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.414694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.414724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.414744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78904 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.414765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.414786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.414802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.415133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.415176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.415204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.415221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.415676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.415726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.415750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.415767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.415786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.415807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.416227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.416262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.416282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.416303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 
18:07:57.416324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.416635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.416658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.416679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.449 [2024-07-24 18:07:57.416700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.449 [2024-07-24 18:07:57.417113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.449 [2024-07-24 18:07:57.417143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78952 len:8 PRP1 0x0 PRP2 0x0 00:20:50.449 [2024-07-24 18:07:57.417166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.708 [2024-07-24 18:07:57.417619] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x195dcf0 was disconnected and freed. reset controller. 00:20:50.708 [2024-07-24 18:07:57.418096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.708 [2024-07-24 18:07:57.418145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.708 [2024-07-24 18:07:57.418171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.708 [2024-07-24 18:07:57.418192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.708 [2024-07-24 18:07:57.418214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.708 [2024-07-24 18:07:57.418234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.708 [2024-07-24 18:07:57.418582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.708 [2024-07-24 18:07:57.418609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.708 [2024-07-24 18:07:57.418630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de240 is same with the state(5) to be set 00:20:50.708 [2024-07-24 18:07:57.419386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:50.708 [2024-07-24 18:07:57.419447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de240 (9): Bad file descriptor 00:20:50.708 [2024-07-24 18:07:57.419909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.708 [2024-07-24 18:07:57.419963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de240 with addr=10.0.0.2, port=4420 00:20:50.708 [2024-07-24 18:07:57.419986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x18de240 is same with the state(5) to be set 00:20:50.708 [2024-07-24 18:07:57.420021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de240 (9): Bad file descriptor 00:20:50.708 [2024-07-24 18:07:57.420400] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:50.708 [2024-07-24 18:07:57.420440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:50.708 [2024-07-24 18:07:57.420462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:50.708 [2024-07-24 18:07:57.420496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:50.708 [2024-07-24 18:07:57.420517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:50.708 18:07:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:51.641 [2024-07-24 18:07:58.420967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.641 [2024-07-24 18:07:58.421041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de240 with addr=10.0.0.2, port=4420 00:20:51.641 [2024-07-24 18:07:58.421057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de240 is same with the state(5) to be set 00:20:51.641 [2024-07-24 18:07:58.421083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de240 (9): Bad file descriptor 00:20:51.641 [2024-07-24 18:07:58.421102] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:51.641 [2024-07-24 18:07:58.421113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:51.641 [2024-07-24 18:07:58.421126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:51.641 [2024-07-24 18:07:58.421153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:51.641 [2024-07-24 18:07:58.421165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:52.573 [2024-07-24 18:07:59.421326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.573 [2024-07-24 18:07:59.421397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de240 with addr=10.0.0.2, port=4420 00:20:52.573 [2024-07-24 18:07:59.421414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de240 is same with the state(5) to be set 00:20:52.573 [2024-07-24 18:07:59.421441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de240 (9): Bad file descriptor 00:20:52.573 [2024-07-24 18:07:59.421460] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:52.573 [2024-07-24 18:07:59.421483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:52.573 [2024-07-24 18:07:59.421495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:52.573 [2024-07-24 18:07:59.421521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:52.573 [2024-07-24 18:07:59.421532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:53.508 [2024-07-24 18:08:00.423681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:53.508 [2024-07-24 18:08:00.423901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de240 with addr=10.0.0.2, port=4420 00:20:53.508 [2024-07-24 18:08:00.424128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de240 is same with the state(5) to be set 00:20:53.508 [2024-07-24 18:08:00.424537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de240 (9): Bad file descriptor 00:20:53.508 [2024-07-24 18:08:00.424920] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:53.508 [2024-07-24 18:08:00.424942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:53.508 [2024-07-24 18:08:00.424955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:53.508 18:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.508 [2024-07-24 18:08:00.428743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:53.508 [2024-07-24 18:08:00.428780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:53.765 [2024-07-24 18:08:00.622021] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.765 18:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 95889 00:20:54.696 [2024-07-24 18:08:01.462412] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
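The repeated connect() failures above (errno = 111, i.e. ECONNREFUSED) are expected at this point in host/timeout.sh: the target's TCP listener appears to have been removed earlier in the test, so every reconnect attempt from bdev_nvme is refused until timeout.sh@102 re-adds it. Once tcp.c reports the target listening on 10.0.0.2 port 4420 again, the next retry connects and the pending reset completes ("Resetting controller successful"). A minimal sketch of that recovery step, assuming a target that already exposes nqn.2016-06.io.spdk:cnode1 and the repo layout used by this job; the RPC shell variable is shorthand introduced here, not part of the test script:
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Re-create the TCP listener the test removed; bdev_nvme's periodic reconnect
# attempts start succeeding as soon as the target is listening again.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420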
00:20:59.999 
00:20:59.999 Latency(us)
00:20:59.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:59.999 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:59.999 Verification LBA range: start 0x0 length 0x4000
00:20:59.999 NVMe0n1 : 10.01 5442.25 21.26 4104.89 0.00 13382.11 581.24 3035877.18
00:20:59.999 ===================================================================================================================
00:20:59.999 Total : 5442.25 21.26 4104.89 0.00 13382.11 0.00 3035877.18
00:20:59.999 0
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 95723
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95723 ']'
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95723
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95723
00:20:59.999 killing process with pid 95723 Received shutdown signal, test time was about 10.000000 seconds
00:20:59.999 
00:20:59.999 Latency(us)
00:20:59.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:59.999 ===================================================================================================================
00:20:59.999 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95723'
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95723
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95723
00:20:59.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96017
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96017 /var/tmp/bdevperf.sock
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96017 /var/tmp/bdevperf.sock
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96017 ']'
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:59.999 18:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:20:59.999 [2024-07-24 18:08:06.611050] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization...
00:20:59.999 [2024-07-24 18:08:06.611164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96017 ]
00:20:59.999 [2024-07-24 18:08:06.758859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:59.999 [2024-07-24 18:08:06.868812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:00.930 18:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:00.930 18:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0
00:21:00.930 18:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96041
00:21:00.930 18:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:21:00.930 18:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96017 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:21:01.189 18:08:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:21:01.447 NVMe0n1
00:21:01.447 18:08:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96094
00:21:01.447 18:08:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:21:01.447 18:08:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:01.703 Running I/O for 10 seconds...
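The trace above is the setup for the next timeout case: a fresh bdevperf is started idle (-z) on core mask 0x4 and driven over the UNIX socket /var/tmp/bdevperf.sock, bpftrace.sh attaches scripts/bpf/nvmf_timeout.bt to the new PID, the bdev_nvme options used by the test (-r -1 -e 9) are applied, and NVMe0n1 is created by attaching the target's nqn.2016-06.io.spdk:cnode1 with a 5-second controller-loss timeout and a 2-second reconnect delay before perform_tests launches the 10-second, 128-deep, 4 KiB randread run. A condensed sketch of the same sequence, using only commands that appear in this trace; the SPDK/SOCK variables and the & backgrounding are shorthand standing in for the harness's waitforlisten/PID bookkeeping, and the bpftrace step needs root:
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock
# Start bdevperf idle; -z keeps it waiting to be configured over $SOCK.
$SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &
BDEVPERF_PID=$!
# (The real test waits for the RPC socket via waitforlisten before issuing RPCs.)
# Optional bpf tracing of the timeout path (root only).
$SPDK/scripts/bpftrace.sh $BDEVPERF_PID $SPDK/scripts/bpf/nvmf_timeout.bt &
# Apply the test's bdev_nvme options, then attach the remote subsystem as NVMe0.
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1 -e 9
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Kick off the queued I/O job; this is what triggers the "Running I/O for 10 seconds..." line above.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests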
00:21:02.630 18:08:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.889 [2024-07-24 18:08:09.637733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19c80 is same with the state(5) to be set 00:21:02.889 [2024-07-24 18:08:09.637795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19c80 is same with the state(5) to be set 00:21:02.889 [2024-07-24 18:08:09.637807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19c80 is same with the state(5) to be set 00:21:02.889 [2024-07-24 18:08:09.637817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19c80 is same with the state(5) to be set 00:21:02.889 [2024-07-24 18:08:09.637828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19c80 is same with the state(5) to be set 00:21:02.889 [2024-07-24 18:08:09.637838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19c80 is same with the state(5) to be set 00:21:02.889 [2024-07-24 18:08:09.637850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19c80 is same with the state(5) to be set 00:21:02.889 [2024-07-24 18:08:09.637865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19c80 is same with the state(5) to be set 00:21:02.889 [2024-07-24 18:08:09.637876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19c80 is same with the state(5) to be set 00:21:02.889 [2024-07-24 18:08:09.637886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19c80 is same with the state(5) to be set 00:21:02.889 [2024-07-24 18:08:09.637896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19c80 is same with the state(5) to be set 00:21:02.889 [2024-07-24 18:08:09.639023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82472 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:02.889 [2024-07-24 18:08:09.639955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.889 [2024-07-24 18:08:09.639967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.889 [2024-07-24 18:08:09.639977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.639989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 
18:08:09.640575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.640989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.640999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641845] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.641900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.641910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.642052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.642129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.642144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.642155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.642168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.642178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.642190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.890 [2024-07-24 18:08:09.642201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.890 [2024-07-24 18:08:09.642302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:02.891 [2024-07-24 18:08:09.642898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.642988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.642998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.643011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.643148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.643171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.643181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.643195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.643206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.643218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.643229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.643259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.643271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 
18:08:09.643283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.643413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.643493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.643505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.643517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.643528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.643549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.643560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.643572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.643583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.643595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.643605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.644025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.644039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.644051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.644062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.644074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.644085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.644097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.644108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.644235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.644262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.644275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.644381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.644396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.644406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.644418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.644429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.644441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.891 [2024-07-24 18:08:09.644530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.891 [2024-07-24 18:08:09.644543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.892 [2024-07-24 18:08:09.644554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.644566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.892 [2024-07-24 18:08:09.644576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.644589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.892 [2024-07-24 18:08:09.644698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.644830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.644844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3256 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.644854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.644870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.645040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.645147] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49704 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.645158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.645170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.645179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.645188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115112 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.645281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.645292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.645300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.645310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31848 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.645320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.645331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.645704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.645721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.645732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.645743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.645752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.645760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105072 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.645771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.645781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.645789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.645798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71512 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.645809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.645819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.646108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.646182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10344 len:8 PRP1 0x0 
PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.646193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.646204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.646213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.646222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114952 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.646232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.646254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.646263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.646272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95048 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.646282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.646296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.646419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.646661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77288 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.646674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.646685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.646694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.646703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125728 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.646713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.646723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.646732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.646741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10528 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.646751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.646761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.646769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.646880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130504 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.646892] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.646902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.646911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.646919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69080 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.647020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.647031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.647039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.647048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35152 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.647058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.647074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.647437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.647482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.647506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.647556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.647573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.648060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34432 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.648087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.648111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.648131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.648644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130832 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.648671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.648709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.649182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.649204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.649227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.649687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.892 [2024-07-24 18:08:09.649719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.892 [2024-07-24 18:08:09.649740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113720 len:8 PRP1 0x0 PRP2 0x0 00:21:02.892 [2024-07-24 18:08:09.649761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.892 [2024-07-24 18:08:09.649784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.649802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.650140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58776 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.650178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.650202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.650219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.650238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71960 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.650707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.650734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.650753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.650771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101664 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.651202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.651260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.651281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.651300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84512 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.651322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.651716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.651736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.651756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36496 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.651779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.652161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.652182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.652202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78536 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.652223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.652580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.652601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.652620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85456 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.652642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.652995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.653017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.653037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46080 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.653059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.653505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.653537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.653559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65664 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.653581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.653604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.653980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.654017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86704 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.654041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.654065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.654084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.654462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114784 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.654499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.654524] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.654543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.654562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.655027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.655052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.655070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.655525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120320 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.655603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.655629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.893 [2024-07-24 18:08:09.655648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.893 [2024-07-24 18:08:09.655668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57008 len:8 PRP1 0x0 PRP2 0x0 00:21:02.893 [2024-07-24 18:08:09.656121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.656565] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23bf8d0 was disconnected and freed. reset controller. 
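The long run of ABORTED - SQ DELETION (00/08) completions above is the host-side signature of a controller reset under load: the submission queue is torn down, every queued READ is completed manually with that status, and only then does bdev_nvme free the qpair and log "reset controller". A minimal sketch of the bdev_nvme options that make a slow I/O trigger such a reset; the 5-second value is an illustrative assumption, not something read out of timeout.sh:
  # Hedged sketch: treat I/O slower than 5 s as timed out and reset the controller,
  # which is the event that aborts the queued commands above with ABORTED - SQ DELETION.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us 5000000 --action-on-timeout reset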
00:21:02.893 [2024-07-24 18:08:09.657014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.893 [2024-07-24 18:08:09.657065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.657093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.893 [2024-07-24 18:08:09.657116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.657541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.893 [2024-07-24 18:08:09.657567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.657591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.893 [2024-07-24 18:08:09.658040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.893 [2024-07-24 18:08:09.658079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352240 is same with the state(5) to be set 00:21:02.893 [2024-07-24 18:08:09.658770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.893 [2024-07-24 18:08:09.658811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352240 (9): Bad file descriptor 00:21:02.893 [2024-07-24 18:08:09.659188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.893 [2024-07-24 18:08:09.659226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352240 with addr=10.0.0.2, port=4420 00:21:02.893 [2024-07-24 18:08:09.659257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352240 is same with the state(5) to be set 00:21:02.893 [2024-07-24 18:08:09.659281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352240 (9): Bad file descriptor 00:21:02.893 [2024-07-24 18:08:09.659649] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.893 [2024-07-24 18:08:09.659679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:02.893 [2024-07-24 18:08:09.659696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.893 [2024-07-24 18:08:09.659723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:02.893 [2024-07-24 18:08:09.660046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.893 18:08:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96094 00:21:04.793 [2024-07-24 18:08:11.660292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.793 [2024-07-24 18:08:11.660362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352240 with addr=10.0.0.2, port=4420 00:21:04.793 [2024-07-24 18:08:11.660379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352240 is same with the state(5) to be set 00:21:04.793 [2024-07-24 18:08:11.660405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352240 (9): Bad file descriptor 00:21:04.793 [2024-07-24 18:08:11.660424] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:04.793 [2024-07-24 18:08:11.660435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:04.793 [2024-07-24 18:08:11.660449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:04.793 [2024-07-24 18:08:11.660475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:04.793 [2024-07-24 18:08:11.660486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:06.694 [2024-07-24 18:08:13.660697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.694 [2024-07-24 18:08:13.660775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352240 with addr=10.0.0.2, port=4420 00:21:06.694 [2024-07-24 18:08:13.660792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352240 is same with the state(5) to be set 00:21:06.694 [2024-07-24 18:08:13.660819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352240 (9): Bad file descriptor 00:21:06.694 [2024-07-24 18:08:13.660838] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:06.694 [2024-07-24 18:08:13.660849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:06.694 [2024-07-24 18:08:13.660861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:06.694 [2024-07-24 18:08:13.660887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.694 [2024-07-24 18:08:13.660898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:09.254 [2024-07-24 18:08:15.660988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
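Each reconnect attempt above fails with connect() errno 111 (ECONNREFUSED) and is retried roughly every two seconds (18:08:09, :11, :13, :15) before the controller is finally left in the failed state. A hedged sketch of the attach options that would produce this cadence; the 2 s delay and 8 s loss timeout are assumptions chosen to match the timestamps, not values taken from the test script, while the address, port, bdev name, and subsystem NQN are the ones visible in the log:
  # Illustrative only: retry the TCP connection every 2 s and give up on the
  # controller after 8 s without a successful reconnect.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8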
00:21:09.254 [2024-07-24 18:08:15.661053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:09.254 [2024-07-24 18:08:15.661071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:09.254 [2024-07-24 18:08:15.661088] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:09.254 [2024-07-24 18:08:15.661126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:09.818 00:21:09.818 Latency(us) 00:21:09.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.818 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:09.818 NVMe0n1 : 8.22 2579.87 10.08 15.56 0.00 49257.62 3198.78 7030452.42 00:21:09.818 =================================================================================================================== 00:21:09.818 Total : 2579.87 10.08 15.56 0.00 49257.62 3198.78 7030452.42 00:21:09.818 0 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:09.818 Attaching 5 probes... 00:21:09.818 1417.945193: reset bdev controller NVMe0 00:21:09.818 1418.283527: reconnect bdev controller NVMe0 00:21:09.818 3419.306393: reconnect delay bdev controller NVMe0 00:21:09.818 3419.332385: reconnect bdev controller NVMe0 00:21:09.818 5419.729588: reconnect delay bdev controller NVMe0 00:21:09.818 5419.756316: reconnect bdev controller NVMe0 00:21:09.818 7420.140948: reconnect delay bdev controller NVMe0 00:21:09.818 7420.166248: reconnect bdev controller NVMe0 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96041 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96017 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96017 ']' 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96017 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96017 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:09.818 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:09.819 killing process with pid 96017 00:21:09.819 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96017' 00:21:09.819 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96017 00:21:09.819 Received shutdown signal, test time was about 8.296796 seconds 00:21:09.819 00:21:09.819 Latency(us) 00:21:09.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:09.819 =================================================================================================================== 00:21:09.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.819 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96017 00:21:10.076 18:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.333 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:10.333 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:10.333 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:10.333 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:21:10.333 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:10.333 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:21:10.333 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:10.333 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:10.333 rmmod nvme_tcp 00:21:10.333 rmmod nvme_fabrics 00:21:10.333 rmmod nvme_keyring 00:21:10.333 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 95427 ']' 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 95427 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95427 ']' 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95427 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95427 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:10.334 killing process with pid 95427 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95427' 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95427 00:21:10.334 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95427 00:21:10.591 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:10.591 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:10.591 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:10.591 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:10.591 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:10.591 18:08:17 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.591 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.591 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.848 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:10.848 00:21:10.848 real 0m47.424s 00:21:10.848 user 2m18.519s 00:21:10.848 sys 0m6.235s 00:21:10.848 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:10.848 ************************************ 00:21:10.848 END TEST nvmf_timeout 00:21:10.848 18:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:10.848 ************************************ 00:21:10.848 18:08:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:21:10.848 18:08:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:10.848 00:21:10.848 real 5m42.063s 00:21:10.848 user 14m38.190s 00:21:10.848 sys 1m15.676s 00:21:10.848 18:08:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:10.848 18:08:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.848 ************************************ 00:21:10.848 END TEST nvmf_host 00:21:10.848 ************************************ 00:21:10.848 00:21:10.848 real 15m39.137s 00:21:10.848 user 40m52.611s 00:21:10.848 sys 3m59.855s 00:21:10.848 18:08:17 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:10.848 18:08:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:10.848 ************************************ 00:21:10.848 END TEST nvmf_tcp 00:21:10.848 ************************************ 00:21:10.848 18:08:17 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:21:10.848 18:08:17 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:10.848 18:08:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:10.848 18:08:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:10.848 18:08:17 -- common/autotest_common.sh@10 -- # set +x 00:21:10.848 ************************************ 00:21:10.848 START TEST spdkcli_nvmf_tcp 00:21:10.848 ************************************ 00:21:10.848 18:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:11.108 * Looking for test storage... 
00:21:11.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96317 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96317 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 96317 ']' 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
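At this point the spdkcli test has launched a fresh nvmf_tgt with -m 0x3 -p 0 and is blocking in waitforlisten until the RPC socket answers. A rough sketch of that idea, assuming the readiness probe is simply an RPC call against /var/tmp/spdk.sock (the real helper in autotest_common.sh may do more than this):
  # Start the target on cores 0-1 and poll its RPC socket until it responds.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
  tgt_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done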
00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:11.108 18:08:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:11.108 [2024-07-24 18:08:17.935716] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:21:11.108 [2024-07-24 18:08:17.935846] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96317 ] 00:21:11.109 [2024-07-24 18:08:18.080158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:11.374 [2024-07-24 18:08:18.208522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.374 [2024-07-24 18:08:18.208541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.938 18:08:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.938 18:08:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:21:11.938 18:08:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:11.938 18:08:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:11.938 18:08:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:12.197 18:08:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:12.197 18:08:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:21:12.197 18:08:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:12.197 18:08:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:12.197 18:08:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:12.197 18:08:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:12.197 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:12.197 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:12.197 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:12.197 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:12.197 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:21:12.197 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:12.197 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:12.197 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:21:12.197 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:12.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:12.197 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:12.197 ' 00:21:14.727 [2024-07-24 18:08:21.670552] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.101 [2024-07-24 18:08:22.951743] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:21:18.633 [2024-07-24 18:08:25.329516] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:21:20.586 [2024-07-24 18:08:27.411168] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:21:22.488 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:22.488 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:22.488 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:22.488 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:22.488 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:22.488 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:22.488 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:22.488 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 
IPv4', '127.0.0.1:4260', True] 00:21:22.488 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:22.488 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:22.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:22.488 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:22.488 18:08:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:22.488 18:08:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.488 18:08:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:22.488 18:08:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:22.488 18:08:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:22.488 18:08:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:22.488 18:08:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:21:22.488 18:08:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:21:22.748 18:08:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:22.748 18:08:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 
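The "Executing command" lines above are spdkcli driving ordinary JSON-RPC calls. A trimmed, hedged equivalent for just the first subsystem, using values visible in the log (io_unit_size, the Malloc3 size, serial number, and 127.0.0.1:4260 listener) and leaving every other option at its default:
  # rpc.py counterpart of the spdkcli steps: TCP transport, one malloc bdev,
  # and a subsystem with a namespace plus a 127.0.0.1:4260 listener.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -u 8192
  $rpc bdev_malloc_create -b Malloc3 32 512
  $rpc nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
  $rpc nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -f ipv4 -a 127.0.0.1 -s 4260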
00:21:22.748 18:08:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:22.748 18:08:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.748 18:08:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:22.748 18:08:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:22.748 18:08:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:22.748 18:08:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:22.748 18:08:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:22.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:22.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:22.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:22.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:21:22.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:21:22.748 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:22.748 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:22.748 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:22.748 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:22.748 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:22.748 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:22.748 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:22.748 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:22.748 ' 00:21:29.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:29.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:29.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:29.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:29.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:21:29.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:21:29.313 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:29.313 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:29.313 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:21:29.313 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:29.313 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:21:29.313 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:29.313 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:21:29.313 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:29.313 18:08:35 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96317 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 96317 ']' 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 96317 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96317 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:29.313 killing process with pid 96317 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96317' 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 96317 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 96317 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96317 ']' 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96317 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 96317 ']' 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 96317 00:21:29.313 Process with pid 96317 is not found 00:21:29.313 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (96317) - No such process 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 96317 is not found' 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:29.313 00:21:29.313 real 0m17.719s 00:21:29.313 user 0m38.511s 00:21:29.313 sys 0m1.044s 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:29.313 ************************************ 00:21:29.313 END TEST spdkcli_nvmf_tcp 00:21:29.313 ************************************ 00:21:29.313 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:29.313 18:08:35 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:29.313 18:08:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:29.313 18:08:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:29.313 18:08:35 -- common/autotest_common.sh@10 -- # set +x 00:21:29.313 ************************************ 00:21:29.313 START TEST nvmf_identify_passthru 00:21:29.313 ************************************ 00:21:29.313 18:08:35 nvmf_identify_passthru -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:29.313 * Looking for test storage... 00:21:29.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:29.313 18:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:29.313 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:21:29.313 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.313 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.313 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:29.314 18:08:35 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.314 18:08:35 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.314 18:08:35 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.314 18:08:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.314 18:08:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.314 18:08:35 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.314 18:08:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:21:29.314 18:08:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:29.314 18:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:29.314 18:08:35 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.314 18:08:35 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.314 18:08:35 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.314 18:08:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.314 18:08:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.314 18:08:35 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.314 18:08:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:21:29.314 18:08:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.314 18:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.314 18:08:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:29.314 18:08:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:29.314 18:08:35 nvmf_identify_passthru -- 
nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:29.314 Cannot find device "nvmf_tgt_br" 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:29.314 Cannot find device "nvmf_tgt_br2" 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:29.314 Cannot find device "nvmf_tgt_br" 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:29.314 Cannot find device "nvmf_tgt_br2" 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:29.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:29.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:29.314 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:29.315 18:08:35 
nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:29.315 18:08:35 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:29.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:21:29.315 00:21:29.315 --- 10.0.0.2 ping statistics --- 00:21:29.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.315 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:29.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:29.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:21:29.315 00:21:29.315 --- 10.0.0.3 ping statistics --- 00:21:29.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.315 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:29.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:29.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:29.315 00:21:29.315 --- 10.0.0.1 ping statistics --- 00:21:29.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.315 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:29.315 18:08:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:29.315 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:29.315 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:29.315 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:21:29.315 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:21:29.315 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:21:29.315 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:21:29.315 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:21:29.315 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:21:29.573 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
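The nvmf_veth_init sequence traced above builds the whole test network in software: a network namespace (nvmf_tgt_ns_spdk) holds the target-side ends of the veth pairs, the initiator end stays in the root namespace, a bridge (nvmf_br) joins the peer ends, and a single iptables rule admits NVMe/TCP traffic on port 4420. A minimal standalone sketch of the same topology, using the interface names and addresses from the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, follows the same pattern and is omitted here):

# sketch only -- mirrors the ip/iptables commands visible in the trace above (run as root)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target reachability, as checked in the trace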
00:21:29.573 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:21:29.573 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:21:29.573 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:21:29.573 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:21:29.573 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:21:29.573 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:29.573 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:29.573 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:21:29.573 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:29.573 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:29.573 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=96811 00:21:29.573 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:29.573 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:29.573 18:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 96811 00:21:29.832 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 96811 ']' 00:21:29.832 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.832 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:29.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.832 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.832 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:29.832 18:08:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:29.832 [2024-07-24 18:08:36.611468] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:21:29.832 [2024-07-24 18:08:36.611587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.832 [2024-07-24 18:08:36.754973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:30.091 [2024-07-24 18:08:36.875845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.091 [2024-07-24 18:08:36.875912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.091 [2024-07-24 18:08:36.875927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.091 [2024-07-24 18:08:36.875940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
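The target above is launched with --wait-for-rpc inside the namespace, and waitforlisten holds the test until the JSON-RPC server is reachable on /var/tmp/spdk.sock before any rpc_cmd is issued. A simplified sketch of that wait, using only standard shell tools ($target_pid is a placeholder for the PID captured at launch; the real waitforlisten helper does more than this):

# sketch: poll for the SPDK RPC UNIX socket, bail out if the target dies first
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break           # socket present -> RPC server is up
    kill -0 "$target_pid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
    sleep 0.1
done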
00:21:30.091 [2024-07-24 18:08:36.875950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.091 [2024-07-24 18:08:36.876096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.091 [2024-07-24 18:08:36.876901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.091 [2024-07-24 18:08:36.877037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.091 [2024-07-24 18:08:36.877040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.025 [2024-07-24 18:08:37.722431] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.025 [2024-07-24 18:08:37.731919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.025 Nvme0n1 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.025 [2024-07-24 18:08:37.875208] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.025 [ 00:21:31.025 { 00:21:31.025 "allow_any_host": true, 00:21:31.025 "hosts": [], 00:21:31.025 "listen_addresses": [], 00:21:31.025 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:31.025 "subtype": "Discovery" 00:21:31.025 }, 00:21:31.025 { 00:21:31.025 "allow_any_host": true, 00:21:31.025 "hosts": [], 00:21:31.025 "listen_addresses": [ 00:21:31.025 { 00:21:31.025 "adrfam": "IPv4", 00:21:31.025 "traddr": "10.0.0.2", 00:21:31.025 "trsvcid": "4420", 00:21:31.025 "trtype": "TCP" 00:21:31.025 } 00:21:31.025 ], 00:21:31.025 "max_cntlid": 65519, 00:21:31.025 "max_namespaces": 1, 00:21:31.025 "min_cntlid": 1, 00:21:31.025 "model_number": "SPDK bdev Controller", 00:21:31.025 "namespaces": [ 00:21:31.025 { 00:21:31.025 "bdev_name": "Nvme0n1", 00:21:31.025 "name": "Nvme0n1", 00:21:31.025 "nguid": "8B6D75054E2E4F93BA1B9E009B5D9AB0", 00:21:31.025 "nsid": 1, 00:21:31.025 "uuid": "8b6d7505-4e2e-4f93-ba1b-9e009b5d9ab0" 00:21:31.025 } 00:21:31.025 ], 00:21:31.025 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.025 "serial_number": "SPDK00000000000001", 00:21:31.025 "subtype": "NVMe" 00:21:31.025 } 00:21:31.025 ] 00:21:31.025 18:08:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:31.025 18:08:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:21:31.283 18:08:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:21:31.283 18:08:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:21:31.283 18:08:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:21:31.283 18:08:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:31.542 18:08:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:21:31.542 18:08:38 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:21:31.542 18:08:38 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:21:31.542 18:08:38 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.542 18:08:38 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:21:31.542 18:08:38 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:21:31.542 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:31.542 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:21:31.542 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:31.542 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:21:31.542 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:31.542 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:31.542 rmmod nvme_tcp 00:21:31.542 rmmod nvme_fabrics 00:21:31.542 rmmod nvme_keyring 00:21:31.542 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:31.542 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:21:31.542 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:21:31.542 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 96811 ']' 00:21:31.542 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 96811 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 96811 ']' 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 96811 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96811 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:31.542 killing process with pid 96811 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96811' 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 96811 00:21:31.542 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 96811 00:21:31.800 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:31.800 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:31.800 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:31.800 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.800 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.800 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.800 18:08:38 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:31.800 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.800 18:08:38 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:31.800 00:21:31.800 real 0m3.230s 00:21:31.800 user 0m7.819s 00:21:31.800 sys 0m0.953s 00:21:31.800 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:31.800 18:08:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.800 ************************************ 00:21:31.800 END TEST nvmf_identify_passthru 00:21:31.800 ************************************ 00:21:32.059 18:08:38 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:32.059 18:08:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:32.059 18:08:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:32.059 18:08:38 -- common/autotest_common.sh@10 -- # set +x 00:21:32.059 ************************************ 00:21:32.059 START TEST nvmf_dif 00:21:32.059 ************************************ 00:21:32.059 18:08:38 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:32.059 * Looking for test storage... 00:21:32.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:32.059 18:08:38 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:32.059 18:08:38 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.059 18:08:38 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.059 18:08:38 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.059 18:08:38 nvmf_dif -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.059 18:08:38 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.059 18:08:38 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.059 18:08:38 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:21:32.059 18:08:38 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:32.059 18:08:38 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:32.059 18:08:38 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:21:32.059 18:08:38 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:32.059 18:08:38 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:32.059 18:08:38 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:21:32.059 18:08:38 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.060 18:08:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:32.060 18:08:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
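The NULL_* defaults recorded above (NULL_SIZE=64, NULL_BLOCK_SIZE=512, NULL_META=16, NULL_DIF=1) are exactly the arguments that the bdev_null_create call uses further down in the trace, and the subsystem/namespace/listener RPCs that follow it export that bdev over NVMe/TCP. A standalone sketch of the same target-side setup, assuming the stock scripts/rpc.py client in place of the rpc_cmd wrapper and a target already listening on /var/tmp/spdk.sock:

# sketch: null bdev with DIF-type-1 metadata, exported over NVMe/TCP (arguments as in the trace)
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420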
00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:32.060 Cannot find device "nvmf_tgt_br" 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@155 -- # true 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:32.060 Cannot find device "nvmf_tgt_br2" 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@156 -- # true 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:32.060 18:08:38 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:32.060 Cannot find device "nvmf_tgt_br" 00:21:32.060 18:08:39 nvmf_dif -- nvmf/common.sh@158 -- # true 00:21:32.060 18:08:39 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:32.060 Cannot find device "nvmf_tgt_br2" 00:21:32.060 18:08:39 nvmf_dif -- nvmf/common.sh@159 -- # true 00:21:32.060 18:08:39 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:32.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@162 -- # true 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:32.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@163 -- # true 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type 
veth peer name nvmf_tgt_br 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:32.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:21:32.384 00:21:32.384 --- 10.0.0.2 ping statistics --- 00:21:32.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.384 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:32.384 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:32.384 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:21:32.384 00:21:32.384 --- 10.0.0.3 ping statistics --- 00:21:32.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.384 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:32.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:32.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:21:32.384 00:21:32.384 --- 10.0.0.1 ping statistics --- 00:21:32.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.384 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:32.384 18:08:39 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:32.950 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:32.950 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:32.950 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:32.950 18:08:39 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.950 18:08:39 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:32.950 18:08:39 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:32.950 18:08:39 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.950 18:08:39 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:32.950 18:08:39 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:32.950 18:08:39 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:32.950 18:08:39 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:21:32.950 18:08:39 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:32.950 18:08:39 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:32.950 18:08:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:32.950 18:08:39 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97156 00:21:32.950 18:08:39 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97156 00:21:32.950 18:08:39 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:32.950 18:08:39 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 97156 ']' 00:21:32.950 18:08:39 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.950 18:08:39 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:32.950 18:08:39 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.950 18:08:39 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:32.950 18:08:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:32.950 [2024-07-24 18:08:39.843839] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:21:32.950 [2024-07-24 18:08:39.843956] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.207 [2024-07-24 18:08:39.992789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.207 [2024-07-24 18:08:40.113304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
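The ' --dif-insert-or-strip' appended to NVMF_TRANSPORT_OPTS above is what makes this a DIF test: with that option the TCP transport handles the protection-information metadata at the target (inserting it on writes and stripping it on reads), so the initiator can exercise DIF-formatted namespaces without generating DIF itself. The create_transport step further down passes the option through; the equivalent direct RPC call, assuming the scripts/rpc.py client, is simply:

# sketch: TCP transport with DIF insert/strip enabled, as dif.sh's create_transport issues below
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip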
00:21:33.207 [2024-07-24 18:08:40.113364] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.207 [2024-07-24 18:08:40.113386] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.207 [2024-07-24 18:08:40.113402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.207 [2024-07-24 18:08:40.113416] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.207 [2024-07-24 18:08:40.113459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.140 18:08:40 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:34.140 18:08:40 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:21:34.140 18:08:40 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.140 18:08:40 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:34.140 18:08:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:34.140 18:08:40 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.140 18:08:40 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:21:34.140 18:08:40 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:34.140 18:08:40 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.140 18:08:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:34.140 [2024-07-24 18:08:40.959083] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.140 18:08:40 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.140 18:08:40 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:34.140 18:08:40 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:34.140 18:08:40 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:34.140 18:08:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:34.140 ************************************ 00:21:34.140 START TEST fio_dif_1_default 00:21:34.140 ************************************ 00:21:34.140 18:08:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:21:34.140 18:08:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:21:34.140 18:08:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:21:34.140 18:08:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:21:34.140 18:08:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:21:34.140 18:08:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:21:34.141 18:08:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:34.141 18:08:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.141 18:08:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:34.141 bdev_null0 00:21:34.141 18:08:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.141 18:08:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:34.141 18:08:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.141 18:08:40 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:34.141 [2024-07-24 18:08:41.023231] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.141 { 00:21:34.141 "params": { 00:21:34.141 "name": "Nvme$subsystem", 00:21:34.141 "trtype": "$TEST_TRANSPORT", 00:21:34.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.141 "adrfam": "ipv4", 00:21:34.141 "trsvcid": "$NVMF_PORT", 00:21:34.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.141 "hdgst": ${hdgst:-false}, 00:21:34.141 "ddgst": ${ddgst:-false} 00:21:34.141 }, 00:21:34.141 "method": "bdev_nvme_attach_controller" 00:21:34.141 } 00:21:34.141 EOF 00:21:34.141 )") 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:21:34.141 18:08:41 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:34.141 "params": { 00:21:34.141 "name": "Nvme0", 00:21:34.141 "trtype": "tcp", 00:21:34.141 "traddr": "10.0.0.2", 00:21:34.141 "adrfam": "ipv4", 00:21:34.141 "trsvcid": "4420", 00:21:34.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:34.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:34.141 "hdgst": false, 00:21:34.141 "ddgst": false 00:21:34.141 }, 00:21:34.141 "method": "bdev_nvme_attach_controller" 00:21:34.141 }' 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:34.141 18:08:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:34.427 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:34.427 fio-3.35 00:21:34.427 Starting 1 thread 00:21:46.635 00:21:46.635 filename0: (groupid=0, jobs=1): err= 0: pid=97246: Wed Jul 24 18:08:51 2024 00:21:46.635 read: IOPS=887, BW=3552KiB/s (3637kB/s)(34.7MiB/10010msec) 00:21:46.635 slat (nsec): min=6469, max=74055, avg=8097.30, stdev=3804.56 00:21:46.635 clat (usec): min=363, max=43061, avg=4481.81, stdev=12099.01 00:21:46.635 lat (usec): min=369, max=43070, avg=4489.91, stdev=12099.17 00:21:46.635 clat percentiles (usec): 00:21:46.635 | 1.00th=[ 396], 5.00th=[ 412], 10.00th=[ 429], 20.00th=[ 449], 00:21:46.635 | 30.00th=[ 465], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 
498], 00:21:46.635 | 70.00th=[ 506], 80.00th=[ 519], 90.00th=[ 685], 95.00th=[41157], 00:21:46.635 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:21:46.635 | 99.99th=[43254] 00:21:46.635 bw ( KiB/s): min= 1152, max= 6080, per=100.00%, avg=3553.60, stdev=1833.96, samples=20 00:21:46.635 iops : min= 288, max= 1520, avg=888.40, stdev=458.49, samples=20 00:21:46.635 lat (usec) : 500=62.17%, 750=27.93% 00:21:46.635 lat (msec) : 4=0.05%, 50=9.86% 00:21:46.635 cpu : usr=84.40%, sys=14.80%, ctx=16, majf=0, minf=9 00:21:46.635 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:46.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.635 issued rwts: total=8888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.635 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:46.635 00:21:46.635 Run status group 0 (all jobs): 00:21:46.635 READ: bw=3552KiB/s (3637kB/s), 3552KiB/s-3552KiB/s (3637kB/s-3637kB/s), io=34.7MiB (36.4MB), run=10010-10010msec 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:46.635 ************************************ 00:21:46.635 END TEST fio_dif_1_default 00:21:46.635 ************************************ 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.635 00:21:46.635 real 0m11.089s 00:21:46.635 user 0m9.157s 00:21:46.635 sys 0m1.777s 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:46.635 18:08:52 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:46.635 18:08:52 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:46.635 18:08:52 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:46.635 18:08:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:46.635 ************************************ 00:21:46.635 START TEST fio_dif_1_multi_subsystems 00:21:46.635 ************************************ 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:21:46.635 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.636 bdev_null0 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.636 [2024-07-24 18:08:52.156497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.636 bdev_null1 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.636 18:08:52 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.636 { 00:21:46.636 "params": { 00:21:46.636 "name": "Nvme$subsystem", 00:21:46.636 "trtype": "$TEST_TRANSPORT", 00:21:46.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.636 "adrfam": "ipv4", 00:21:46.636 "trsvcid": "$NVMF_PORT", 00:21:46.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.636 "hdgst": ${hdgst:-false}, 00:21:46.636 "ddgst": ${ddgst:-false} 00:21:46.636 }, 00:21:46.636 "method": "bdev_nvme_attach_controller" 00:21:46.636 } 00:21:46.636 EOF 00:21:46.636 )") 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.636 { 00:21:46.636 "params": { 00:21:46.636 "name": "Nvme$subsystem", 00:21:46.636 "trtype": "$TEST_TRANSPORT", 00:21:46.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.636 "adrfam": "ipv4", 00:21:46.636 "trsvcid": "$NVMF_PORT", 00:21:46.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.636 "hdgst": ${hdgst:-false}, 00:21:46.636 "ddgst": ${ddgst:-false} 00:21:46.636 }, 00:21:46.636 "method": "bdev_nvme_attach_controller" 00:21:46.636 } 00:21:46.636 EOF 00:21:46.636 )") 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
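The per-subsystem JSON being assembled above is what fio's spdk_bdev engine will consume; the target side it points at is the pair of DIF-enabled null-bdev subsystems created earlier in this test. A minimal sketch of that setup, assuming SPDK's scripts/rpc.py is driven directly against a target already listening on 10.0.0.2:4420 (the harness issues the same RPCs through its rpc_cmd wrapper) and reusing the exact arguments seen in the trace:

  # Two 64 MiB null bdevs, 512-byte blocks with 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  # One NVMe-oF/TCP subsystem per bdev, open to any host, listening on 10.0.0.2:4420
  for i in 0 1; do
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done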
00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:46.636 "params": { 00:21:46.636 "name": "Nvme0", 00:21:46.636 "trtype": "tcp", 00:21:46.636 "traddr": "10.0.0.2", 00:21:46.636 "adrfam": "ipv4", 00:21:46.636 "trsvcid": "4420", 00:21:46.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:46.636 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:46.636 "hdgst": false, 00:21:46.636 "ddgst": false 00:21:46.636 }, 00:21:46.636 "method": "bdev_nvme_attach_controller" 00:21:46.636 },{ 00:21:46.636 "params": { 00:21:46.636 "name": "Nvme1", 00:21:46.636 "trtype": "tcp", 00:21:46.636 "traddr": "10.0.0.2", 00:21:46.636 "adrfam": "ipv4", 00:21:46.636 "trsvcid": "4420", 00:21:46.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:46.636 "hdgst": false, 00:21:46.636 "ddgst": false 00:21:46.636 }, 00:21:46.636 "method": "bdev_nvme_attach_controller" 00:21:46.636 }' 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:46.636 18:08:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:46.636 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:46.637 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:46.637 fio-3.35 00:21:46.637 Starting 2 threads 00:21:56.701 00:21:56.701 filename0: (groupid=0, jobs=1): err= 0: pid=97405: Wed Jul 24 18:09:03 2024 00:21:56.701 read: IOPS=208, BW=836KiB/s (856kB/s)(8368KiB/10010msec) 00:21:56.701 slat (nsec): min=6580, max=58374, avg=11979.07, stdev=8164.51 00:21:56.701 clat (usec): min=355, max=42489, avg=19100.63, stdev=20184.24 00:21:56.701 lat (usec): min=362, max=42499, avg=19112.61, stdev=20184.17 00:21:56.701 clat percentiles (usec): 00:21:56.701 | 1.00th=[ 371], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[ 429], 00:21:56.701 | 30.00th=[ 445], 40.00th=[ 461], 50.00th=[ 490], 60.00th=[40633], 00:21:56.701 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:56.701 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:21:56.701 | 99.99th=[42730] 00:21:56.701 bw ( KiB/s): min= 544, max= 1152, per=42.97%, avg=835.20, stdev=162.47, samples=20 00:21:56.701 iops : 
min= 136, max= 288, avg=208.80, stdev=40.62, samples=20 00:21:56.701 lat (usec) : 500=51.48%, 750=1.82%, 1000=0.43% 00:21:56.701 lat (msec) : 2=0.19%, 50=46.08% 00:21:56.701 cpu : usr=94.08%, sys=4.94%, ctx=117, majf=0, minf=9 00:21:56.701 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:56.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.701 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.701 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:56.701 filename1: (groupid=0, jobs=1): err= 0: pid=97406: Wed Jul 24 18:09:03 2024 00:21:56.701 read: IOPS=276, BW=1107KiB/s (1134kB/s)(10.8MiB/10012msec) 00:21:56.701 slat (nsec): min=6134, max=94134, avg=10185.19, stdev=7407.73 00:21:56.701 clat (usec): min=363, max=42513, avg=14415.01, stdev=19258.69 00:21:56.701 lat (usec): min=370, max=42522, avg=14425.19, stdev=19258.65 00:21:56.701 clat percentiles (usec): 00:21:56.701 | 1.00th=[ 383], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 420], 00:21:56.701 | 30.00th=[ 433], 40.00th=[ 445], 50.00th=[ 461], 60.00th=[ 490], 00:21:56.701 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:56.701 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:21:56.701 | 99.99th=[42730] 00:21:56.701 bw ( KiB/s): min= 800, max= 1632, per=56.97%, avg=1107.20, stdev=266.80, samples=20 00:21:56.701 iops : min= 200, max= 408, avg=276.80, stdev=66.70, samples=20 00:21:56.701 lat (usec) : 500=61.58%, 750=3.25%, 1000=0.54% 00:21:56.701 lat (msec) : 2=0.14%, 50=34.49% 00:21:56.701 cpu : usr=95.93%, sys=3.54%, ctx=30, majf=0, minf=0 00:21:56.701 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:56.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.701 issued rwts: total=2772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.701 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:56.701 00:21:56.701 Run status group 0 (all jobs): 00:21:56.701 READ: bw=1943KiB/s (1990kB/s), 836KiB/s-1107KiB/s (856kB/s-1134kB/s), io=19.0MiB (19.9MB), run=10010-10012msec 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.701 18:09:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:56.701 ************************************ 00:21:56.701 END TEST fio_dif_1_multi_subsystems 00:21:56.701 ************************************ 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.701 00:21:56.701 real 0m11.191s 00:21:56.701 user 0m19.840s 00:21:56.701 sys 0m1.139s 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:56.701 18:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:56.701 18:09:03 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:56.701 18:09:03 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:56.701 18:09:03 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:56.701 18:09:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:56.701 ************************************ 00:21:56.701 START TEST fio_dif_rand_params 00:21:56.701 ************************************ 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.701 bdev_null0 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.701 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.702 [2024-07-24 18:09:03.413172] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:56.702 { 00:21:56.702 "params": { 00:21:56.702 "name": "Nvme$subsystem", 00:21:56.702 "trtype": "$TEST_TRANSPORT", 00:21:56.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.702 "adrfam": "ipv4", 00:21:56.702 "trsvcid": "$NVMF_PORT", 00:21:56.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.702 "hdgst": ${hdgst:-false}, 00:21:56.702 "ddgst": ${ddgst:-false} 00:21:56.702 }, 00:21:56.702 "method": "bdev_nvme_attach_controller" 00:21:56.702 } 00:21:56.702 EOF 00:21:56.702 )") 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
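The command that follows wires this together: the expanded JSON is handed to fio over /dev/fd/62 and the generated job file over /dev/fd/61, with the spdk_bdev plugin preloaded. A minimal standalone sketch, assuming ordinary files instead of /dev/fd descriptors, a namespace bdev named Nvme0n1 (the conventional name for the Nvme0 controller attached in the JSON; the harness's generated job file is not shown in this log), and the bs=128k / iodepth=3 / numjobs=3 / runtime=5 parameters set earlier in this test:

  # dif.json: SPDK JSON config containing the bdev_nvme_attach_controller entry
  # assembled above (the surrounding wrapper is not printed in this trace)
  cat > dif.fio <<'EOF'
  [global]
  thread=1            # the spdk_bdev fio plugin runs jobs as threads
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  [filename0]
  filename=Nvme0n1    # assumed bdev name exposed by the Nvme0 attach
  EOF
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./dif.json ./dif.fio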
00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:56.702 "params": { 00:21:56.702 "name": "Nvme0", 00:21:56.702 "trtype": "tcp", 00:21:56.702 "traddr": "10.0.0.2", 00:21:56.702 "adrfam": "ipv4", 00:21:56.702 "trsvcid": "4420", 00:21:56.702 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:56.702 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:56.702 "hdgst": false, 00:21:56.702 "ddgst": false 00:21:56.702 }, 00:21:56.702 "method": "bdev_nvme_attach_controller" 00:21:56.702 }' 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:56.702 18:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:56.702 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:56.702 ... 
00:21:56.702 fio-3.35 00:21:56.702 Starting 3 threads 00:22:03.262 00:22:03.262 filename0: (groupid=0, jobs=1): err= 0: pid=97562: Wed Jul 24 18:09:09 2024 00:22:03.262 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(119MiB/5026msec) 00:22:03.262 slat (nsec): min=4684, max=44526, avg=15113.60, stdev=7365.58 00:22:03.262 clat (usec): min=3650, max=52737, avg=15875.62, stdev=15453.46 00:22:03.262 lat (usec): min=3659, max=52748, avg=15890.73, stdev=15453.09 00:22:03.262 clat percentiles (usec): 00:22:03.262 | 1.00th=[ 4113], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 7439], 00:22:03.262 | 30.00th=[ 7963], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10290], 00:22:03.262 | 70.00th=[10683], 80.00th=[11338], 90.00th=[50070], 95.00th=[51119], 00:22:03.262 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:22:03.262 | 99.99th=[52691] 00:22:03.262 bw ( KiB/s): min=18432, max=33024, per=25.11%, avg=24185.40, stdev=5562.25, samples=10 00:22:03.262 iops : min= 144, max= 258, avg=188.90, stdev=43.37, samples=10 00:22:03.262 lat (msec) : 4=0.53%, 10=52.32%, 20=30.38%, 50=6.75%, 100=10.02% 00:22:03.262 cpu : usr=92.72%, sys=5.77%, ctx=25, majf=0, minf=0 00:22:03.262 IO depths : 1=9.2%, 2=90.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:03.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.262 issued rwts: total=948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.262 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:03.262 filename0: (groupid=0, jobs=1): err= 0: pid=97563: Wed Jul 24 18:09:09 2024 00:22:03.262 read: IOPS=335, BW=42.0MiB/s (44.0MB/s)(211MiB/5031msec) 00:22:03.262 slat (nsec): min=4751, max=54012, avg=14095.60, stdev=9803.34 00:22:03.262 clat (usec): min=3818, max=49713, avg=8897.54, stdev=5292.02 00:22:03.262 lat (usec): min=3826, max=49741, avg=8911.64, stdev=5293.50 00:22:03.262 clat percentiles (usec): 00:22:03.262 | 1.00th=[ 3916], 5.00th=[ 4015], 10.00th=[ 4080], 20.00th=[ 4178], 00:22:03.262 | 30.00th=[ 6128], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:22:03.262 | 70.00th=[ 9896], 80.00th=[12780], 90.00th=[13698], 95.00th=[14484], 00:22:03.262 | 99.00th=[43779], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:22:03.262 | 99.99th=[49546] 00:22:03.262 bw ( KiB/s): min=33024, max=52224, per=44.81%, avg=43161.60, stdev=6279.05, samples=10 00:22:03.262 iops : min= 258, max= 408, avg=337.20, stdev=49.06, samples=10 00:22:03.262 lat (msec) : 4=3.43%, 10=67.08%, 20=28.42%, 50=1.07% 00:22:03.262 cpu : usr=91.39%, sys=6.80%, ctx=10, majf=0, minf=0 00:22:03.262 IO depths : 1=32.1%, 2=67.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:03.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.262 issued rwts: total=1689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.262 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:03.262 filename0: (groupid=0, jobs=1): err= 0: pid=97564: Wed Jul 24 18:09:09 2024 00:22:03.262 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(144MiB/5019msec) 00:22:03.262 slat (nsec): min=6728, max=48926, avg=14810.88, stdev=7025.78 00:22:03.262 clat (usec): min=3715, max=53985, avg=13077.74, stdev=12644.35 00:22:03.262 lat (usec): min=3735, max=53996, avg=13092.55, stdev=12644.33 00:22:03.262 clat percentiles (usec): 00:22:03.262 | 1.00th=[ 3916], 5.00th=[ 5735], 10.00th=[ 6390], 
20.00th=[ 7111], 00:22:03.262 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 9241], 60.00th=[10421], 00:22:03.262 | 70.00th=[11076], 80.00th=[11731], 90.00th=[46924], 95.00th=[49546], 00:22:03.262 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53216], 99.95th=[53740], 00:22:03.262 | 99.99th=[53740] 00:22:03.262 bw ( KiB/s): min=23552, max=36864, per=30.45%, avg=29332.20, stdev=5299.25, samples=10 00:22:03.262 iops : min= 184, max= 288, avg=229.10, stdev=41.43, samples=10 00:22:03.262 lat (msec) : 4=2.00%, 10=53.44%, 20=34.12%, 50=6.01%, 100=4.44% 00:22:03.262 cpu : usr=93.12%, sys=5.62%, ctx=11, majf=0, minf=0 00:22:03.262 IO depths : 1=6.6%, 2=93.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:03.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.262 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.262 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:03.262 00:22:03.262 Run status group 0 (all jobs): 00:22:03.262 READ: bw=94.1MiB/s (98.6MB/s), 23.6MiB/s-42.0MiB/s (24.7MB/s-44.0MB/s), io=473MiB (496MB), run=5019-5031msec 00:22:03.262 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:03.262 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:03.262 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:03.262 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:03.262 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:03.262 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 bdev_null0 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 [2024-07-24 18:09:09.461430] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 bdev_null1 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 18:09:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 bdev_null2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.263 { 00:22:03.263 "params": { 00:22:03.263 "name": "Nvme$subsystem", 00:22:03.263 "trtype": "$TEST_TRANSPORT", 00:22:03.263 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:22:03.263 "adrfam": "ipv4", 00:22:03.263 "trsvcid": "$NVMF_PORT", 00:22:03.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.263 "hdgst": ${hdgst:-false}, 00:22:03.263 "ddgst": ${ddgst:-false} 00:22:03.263 }, 00:22:03.263 "method": "bdev_nvme_attach_controller" 00:22:03.263 } 00:22:03.263 EOF 00:22:03.263 )") 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.263 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.263 { 00:22:03.263 "params": { 00:22:03.263 "name": "Nvme$subsystem", 00:22:03.263 "trtype": "$TEST_TRANSPORT", 00:22:03.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.263 "adrfam": "ipv4", 00:22:03.263 "trsvcid": "$NVMF_PORT", 00:22:03.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.263 "hdgst": ${hdgst:-false}, 00:22:03.263 "ddgst": ${ddgst:-false} 00:22:03.263 }, 00:22:03.263 "method": "bdev_nvme_attach_controller" 00:22:03.263 } 00:22:03.263 EOF 00:22:03.263 )") 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.264 { 00:22:03.264 "params": { 00:22:03.264 "name": "Nvme$subsystem", 00:22:03.264 "trtype": "$TEST_TRANSPORT", 00:22:03.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.264 "adrfam": "ipv4", 00:22:03.264 "trsvcid": "$NVMF_PORT", 00:22:03.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.264 "hdgst": ${hdgst:-false}, 00:22:03.264 "ddgst": ${ddgst:-false} 00:22:03.264 }, 00:22:03.264 "method": "bdev_nvme_attach_controller" 00:22:03.264 } 00:22:03.264 EOF 00:22:03.264 )") 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:03.264 "params": { 00:22:03.264 "name": "Nvme0", 00:22:03.264 "trtype": "tcp", 00:22:03.264 "traddr": "10.0.0.2", 00:22:03.264 "adrfam": "ipv4", 00:22:03.264 "trsvcid": "4420", 00:22:03.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:03.264 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:03.264 "hdgst": false, 00:22:03.264 "ddgst": false 00:22:03.264 }, 00:22:03.264 "method": "bdev_nvme_attach_controller" 00:22:03.264 },{ 00:22:03.264 "params": { 00:22:03.264 "name": "Nvme1", 00:22:03.264 "trtype": "tcp", 00:22:03.264 "traddr": "10.0.0.2", 00:22:03.264 "adrfam": "ipv4", 00:22:03.264 "trsvcid": "4420", 00:22:03.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.264 "hdgst": false, 00:22:03.264 "ddgst": false 00:22:03.264 }, 00:22:03.264 "method": "bdev_nvme_attach_controller" 00:22:03.264 },{ 00:22:03.264 "params": { 00:22:03.264 "name": "Nvme2", 00:22:03.264 "trtype": "tcp", 00:22:03.264 "traddr": "10.0.0.2", 00:22:03.264 "adrfam": "ipv4", 00:22:03.264 "trsvcid": "4420", 00:22:03.264 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:03.264 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:03.264 "hdgst": false, 00:22:03.264 "ddgst": false 00:22:03.264 }, 00:22:03.264 "method": "bdev_nvme_attach_controller" 00:22:03.264 }' 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:03.264 18:09:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:03.264 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:03.264 ... 00:22:03.264 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:03.264 ... 00:22:03.264 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:03.264 ... 00:22:03.264 fio-3.35 00:22:03.264 Starting 24 threads 00:22:15.456 00:22:15.456 filename0: (groupid=0, jobs=1): err= 0: pid=97659: Wed Jul 24 18:09:20 2024 00:22:15.456 read: IOPS=249, BW=999KiB/s (1023kB/s)(9.79MiB/10027msec) 00:22:15.456 slat (usec): min=4, max=8026, avg=21.92, stdev=253.27 00:22:15.456 clat (msec): min=11, max=129, avg=63.89, stdev=22.33 00:22:15.456 lat (msec): min=11, max=129, avg=63.91, stdev=22.33 00:22:15.456 clat percentiles (msec): 00:22:15.456 | 1.00th=[ 13], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 46], 00:22:15.457 | 30.00th=[ 48], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 69], 00:22:15.457 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 107], 00:22:15.457 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 130], 99.95th=[ 130], 00:22:15.457 | 99.99th=[ 130] 00:22:15.457 bw ( KiB/s): min= 640, max= 1574, per=4.64%, avg=994.70, stdev=215.54, samples=20 00:22:15.457 iops : min= 160, max= 393, avg=248.65, stdev=53.81, samples=20 00:22:15.457 lat (msec) : 20=1.92%, 50=31.18%, 100=61.00%, 250=5.91% 00:22:15.457 cpu : usr=36.64%, sys=1.97%, ctx=1111, majf=0, minf=9 00:22:15.457 IO depths : 1=0.4%, 2=0.8%, 4=6.0%, 8=79.2%, 16=13.6%, 32=0.0%, >=64=0.0% 00:22:15.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 complete : 0=0.0%, 4=89.1%, 8=6.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 issued rwts: total=2505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.457 filename0: (groupid=0, jobs=1): err= 0: pid=97660: Wed Jul 24 18:09:20 2024 00:22:15.457 read: IOPS=248, BW=994KiB/s (1018kB/s)(9972KiB/10034msec) 00:22:15.457 slat (usec): min=3, max=4025, avg=17.58, stdev=148.91 00:22:15.457 clat (msec): min=20, max=135, avg=64.17, stdev=19.74 00:22:15.457 lat (msec): min=20, max=135, avg=64.19, stdev=19.74 00:22:15.457 clat percentiles (msec): 00:22:15.457 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 46], 00:22:15.457 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 63], 60.00th=[ 68], 00:22:15.457 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 103], 00:22:15.457 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 136], 99.95th=[ 136], 00:22:15.457 | 99.99th=[ 136] 00:22:15.457 bw ( KiB/s): min= 704, max= 1296, per=4.64%, avg=994.40, stdev=148.67, samples=20 00:22:15.457 iops : min= 176, max= 324, avg=248.60, stdev=37.17, samples=20 00:22:15.457 lat (msec) : 50=31.53%, 100=62.90%, 250=5.58% 00:22:15.457 cpu : usr=43.12%, sys=2.22%, ctx=1289, majf=0, minf=9 00:22:15.457 IO depths : 1=1.0%, 2=2.2%, 4=9.0%, 8=75.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:22:15.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 complete : 0=0.0%, 4=89.7%, 
8=5.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 issued rwts: total=2493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.457 filename0: (groupid=0, jobs=1): err= 0: pid=97661: Wed Jul 24 18:09:20 2024 00:22:15.457 read: IOPS=222, BW=890KiB/s (911kB/s)(8928KiB/10031msec) 00:22:15.457 slat (usec): min=3, max=8025, avg=20.38, stdev=221.07 00:22:15.457 clat (msec): min=26, max=198, avg=71.69, stdev=25.91 00:22:15.457 lat (msec): min=26, max=198, avg=71.71, stdev=25.91 00:22:15.457 clat percentiles (msec): 00:22:15.457 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 51], 00:22:15.457 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 72], 00:22:15.457 | 70.00th=[ 79], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 121], 00:22:15.457 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 199], 99.95th=[ 199], 00:22:15.457 | 99.99th=[ 199] 00:22:15.457 bw ( KiB/s): min= 472, max= 1280, per=4.15%, avg=888.20, stdev=206.34, samples=20 00:22:15.457 iops : min= 118, max= 320, avg=222.00, stdev=51.55, samples=20 00:22:15.457 lat (msec) : 50=18.95%, 100=69.00%, 250=12.05% 00:22:15.457 cpu : usr=40.85%, sys=2.31%, ctx=1502, majf=0, minf=9 00:22:15.457 IO depths : 1=1.8%, 2=3.6%, 4=12.0%, 8=71.1%, 16=11.6%, 32=0.0%, >=64=0.0% 00:22:15.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.457 filename0: (groupid=0, jobs=1): err= 0: pid=97662: Wed Jul 24 18:09:20 2024 00:22:15.457 read: IOPS=227, BW=908KiB/s (930kB/s)(9088KiB/10006msec) 00:22:15.457 slat (usec): min=4, max=12034, avg=29.77, stdev=394.08 00:22:15.457 clat (msec): min=31, max=155, avg=70.24, stdev=20.19 00:22:15.457 lat (msec): min=31, max=155, avg=70.27, stdev=20.20 00:22:15.457 clat percentiles (msec): 00:22:15.457 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 46], 20.00th=[ 53], 00:22:15.457 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:22:15.457 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 99], 95.00th=[ 108], 00:22:15.457 | 99.00th=[ 126], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 157], 00:22:15.457 | 99.99th=[ 157] 00:22:15.457 bw ( KiB/s): min= 736, max= 1280, per=4.25%, avg=909.37, stdev=137.02, samples=19 00:22:15.457 iops : min= 184, max= 320, avg=227.32, stdev=34.25, samples=19 00:22:15.457 lat (msec) : 50=18.05%, 100=74.03%, 250=7.92% 00:22:15.457 cpu : usr=37.65%, sys=1.85%, ctx=1047, majf=0, minf=9 00:22:15.457 IO depths : 1=1.9%, 2=4.3%, 4=13.5%, 8=69.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:22:15.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 issued rwts: total=2272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.457 filename0: (groupid=0, jobs=1): err= 0: pid=97663: Wed Jul 24 18:09:20 2024 00:22:15.457 read: IOPS=223, BW=893KiB/s (914kB/s)(8960KiB/10038msec) 00:22:15.457 slat (usec): min=4, max=8033, avg=22.80, stdev=293.26 00:22:15.457 clat (msec): min=20, max=166, avg=71.53, stdev=23.95 00:22:15.457 lat (msec): min=20, max=166, avg=71.55, stdev=23.96 00:22:15.457 clat percentiles (msec): 00:22:15.457 | 1.00th=[ 28], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 50], 00:22:15.457 | 30.00th=[ 61], 
40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:22:15.457 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 121], 00:22:15.457 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 167], 00:22:15.457 | 99.99th=[ 167] 00:22:15.457 bw ( KiB/s): min= 512, max= 1125, per=4.15%, avg=889.10, stdev=156.70, samples=20 00:22:15.457 iops : min= 128, max= 281, avg=222.25, stdev=39.16, samples=20 00:22:15.457 lat (msec) : 50=20.80%, 100=68.17%, 250=11.03% 00:22:15.457 cpu : usr=31.79%, sys=1.46%, ctx=858, majf=0, minf=9 00:22:15.457 IO depths : 1=0.9%, 2=2.4%, 4=9.6%, 8=74.2%, 16=12.9%, 32=0.0%, >=64=0.0% 00:22:15.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.457 filename0: (groupid=0, jobs=1): err= 0: pid=97664: Wed Jul 24 18:09:20 2024 00:22:15.457 read: IOPS=254, BW=1019KiB/s (1043kB/s)(9.98MiB/10032msec) 00:22:15.457 slat (usec): min=3, max=4028, avg=13.38, stdev=79.65 00:22:15.457 clat (msec): min=6, max=137, avg=62.65, stdev=21.05 00:22:15.457 lat (msec): min=6, max=137, avg=62.66, stdev=21.05 00:22:15.457 clat percentiles (msec): 00:22:15.457 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 46], 00:22:15.457 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 63], 60.00th=[ 66], 00:22:15.457 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 90], 95.00th=[ 103], 00:22:15.457 | 99.00th=[ 115], 99.50th=[ 136], 99.90th=[ 138], 99.95th=[ 138], 00:22:15.457 | 99.99th=[ 138] 00:22:15.457 bw ( KiB/s): min= 688, max= 1269, per=4.75%, avg=1017.10, stdev=168.10, samples=20 00:22:15.457 iops : min= 172, max= 317, avg=254.25, stdev=41.99, samples=20 00:22:15.457 lat (msec) : 10=0.63%, 20=1.25%, 50=30.29%, 100=62.39%, 250=5.44% 00:22:15.457 cpu : usr=45.50%, sys=2.08%, ctx=1201, majf=0, minf=9 00:22:15.457 IO depths : 1=1.3%, 2=3.0%, 4=10.7%, 8=73.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:22:15.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 complete : 0=0.0%, 4=90.1%, 8=4.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 issued rwts: total=2555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.457 filename0: (groupid=0, jobs=1): err= 0: pid=97665: Wed Jul 24 18:09:20 2024 00:22:15.457 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10011msec) 00:22:15.457 slat (usec): min=6, max=4025, avg=14.38, stdev=111.14 00:22:15.457 clat (msec): min=20, max=165, avg=61.27, stdev=21.69 00:22:15.457 lat (msec): min=20, max=165, avg=61.29, stdev=21.69 00:22:15.457 clat percentiles (msec): 00:22:15.457 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 44], 00:22:15.457 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 64], 00:22:15.457 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 95], 95.00th=[ 106], 00:22:15.457 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:22:15.457 | 99.99th=[ 167] 00:22:15.457 bw ( KiB/s): min= 600, max= 1424, per=4.82%, avg=1031.16, stdev=233.00, samples=19 00:22:15.457 iops : min= 150, max= 356, avg=257.74, stdev=58.22, samples=19 00:22:15.457 lat (msec) : 50=38.01%, 100=55.13%, 250=6.86% 00:22:15.457 cpu : usr=44.98%, sys=2.08%, ctx=1204, majf=0, minf=9 00:22:15.457 IO depths : 1=1.0%, 2=2.1%, 4=8.9%, 8=75.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:22:15.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 complete : 0=0.0%, 4=89.7%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.457 issued rwts: total=2610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.457 filename0: (groupid=0, jobs=1): err= 0: pid=97666: Wed Jul 24 18:09:20 2024 00:22:15.457 read: IOPS=263, BW=1054KiB/s (1079kB/s)(10.3MiB/10034msec) 00:22:15.457 slat (usec): min=4, max=8077, avg=19.17, stdev=250.62 00:22:15.457 clat (msec): min=3, max=139, avg=60.57, stdev=21.80 00:22:15.457 lat (msec): min=3, max=139, avg=60.59, stdev=21.81 00:22:15.457 clat percentiles (msec): 00:22:15.457 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 39], 20.00th=[ 45], 00:22:15.457 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 65], 00:22:15.457 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 86], 95.00th=[ 99], 00:22:15.457 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 140], 99.95th=[ 140], 00:22:15.457 | 99.99th=[ 140] 00:22:15.457 bw ( KiB/s): min= 736, max= 1968, per=4.91%, avg=1050.45, stdev=297.86, samples=20 00:22:15.457 iops : min= 184, max= 492, avg=262.60, stdev=74.46, samples=20 00:22:15.457 lat (msec) : 4=1.21%, 10=2.04%, 20=1.82%, 50=27.01%, 100=63.87% 00:22:15.458 lat (msec) : 250=4.05% 00:22:15.458 cpu : usr=43.09%, sys=2.21%, ctx=1737, majf=0, minf=9 00:22:15.458 IO depths : 1=1.4%, 2=3.0%, 4=10.6%, 8=73.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:22:15.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 issued rwts: total=2643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.458 filename1: (groupid=0, jobs=1): err= 0: pid=97667: Wed Jul 24 18:09:20 2024 00:22:15.458 read: IOPS=198, BW=794KiB/s (813kB/s)(7948KiB/10016msec) 00:22:15.458 slat (usec): min=4, max=4038, avg=14.11, stdev=90.46 00:22:15.458 clat (msec): min=23, max=180, avg=80.52, stdev=25.58 00:22:15.458 lat (msec): min=23, max=180, avg=80.53, stdev=25.58 00:22:15.458 clat percentiles (msec): 00:22:15.458 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 49], 20.00th=[ 61], 00:22:15.458 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 85], 00:22:15.458 | 70.00th=[ 93], 80.00th=[ 102], 90.00th=[ 115], 95.00th=[ 127], 00:22:15.458 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 182], 99.95th=[ 182], 00:22:15.458 | 99.99th=[ 182] 00:22:15.458 bw ( KiB/s): min= 512, max= 1152, per=3.65%, avg=782.42, stdev=155.62, samples=19 00:22:15.458 iops : min= 128, max= 288, avg=195.58, stdev=38.92, samples=19 00:22:15.458 lat (msec) : 50=10.72%, 100=68.75%, 250=20.53% 00:22:15.458 cpu : usr=38.53%, sys=1.89%, ctx=1084, majf=0, minf=9 00:22:15.458 IO depths : 1=3.4%, 2=7.5%, 4=18.4%, 8=61.4%, 16=9.2%, 32=0.0%, >=64=0.0% 00:22:15.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 complete : 0=0.0%, 4=92.3%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 issued rwts: total=1987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.458 filename1: (groupid=0, jobs=1): err= 0: pid=97668: Wed Jul 24 18:09:20 2024 00:22:15.458 read: IOPS=204, BW=817KiB/s (836kB/s)(8180KiB/10015msec) 00:22:15.458 slat (usec): min=3, max=8052, avg=18.33, stdev=198.72 00:22:15.458 clat (msec): min=19, max=181, avg=78.23, stdev=23.12 00:22:15.458 lat (msec): min=19, max=181, avg=78.25, stdev=23.12 00:22:15.458 clat 
percentiles (msec): 00:22:15.458 | 1.00th=[ 28], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 61], 00:22:15.458 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 84], 00:22:15.458 | 70.00th=[ 88], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 120], 00:22:15.458 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 182], 99.95th=[ 182], 00:22:15.458 | 99.99th=[ 182] 00:22:15.458 bw ( KiB/s): min= 552, max= 1168, per=3.79%, avg=811.05, stdev=136.42, samples=19 00:22:15.458 iops : min= 138, max= 292, avg=202.74, stdev=34.12, samples=19 00:22:15.458 lat (msec) : 20=0.29%, 50=10.61%, 100=74.77%, 250=14.33% 00:22:15.458 cpu : usr=34.64%, sys=2.09%, ctx=998, majf=0, minf=9 00:22:15.458 IO depths : 1=1.9%, 2=4.2%, 4=13.3%, 8=69.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:22:15.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 issued rwts: total=2045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.458 filename1: (groupid=0, jobs=1): err= 0: pid=97669: Wed Jul 24 18:09:20 2024 00:22:15.458 read: IOPS=198, BW=795KiB/s (815kB/s)(7972KiB/10022msec) 00:22:15.458 slat (usec): min=4, max=3024, avg=13.28, stdev=67.67 00:22:15.458 clat (msec): min=23, max=167, avg=80.35, stdev=23.23 00:22:15.458 lat (msec): min=23, max=167, avg=80.37, stdev=23.23 00:22:15.458 clat percentiles (msec): 00:22:15.458 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 64], 00:22:15.458 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 78], 60.00th=[ 83], 00:22:15.458 | 70.00th=[ 89], 80.00th=[ 97], 90.00th=[ 112], 95.00th=[ 124], 00:22:15.458 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:22:15.458 | 99.99th=[ 169] 00:22:15.458 bw ( KiB/s): min= 544, max= 1104, per=3.69%, avg=790.85, stdev=154.94, samples=20 00:22:15.458 iops : min= 136, max= 276, avg=197.70, stdev=38.72, samples=20 00:22:15.458 lat (msec) : 50=7.98%, 100=73.36%, 250=18.67% 00:22:15.458 cpu : usr=36.31%, sys=1.96%, ctx=1329, majf=0, minf=9 00:22:15.458 IO depths : 1=1.7%, 2=3.9%, 4=12.8%, 8=69.8%, 16=11.7%, 32=0.0%, >=64=0.0% 00:22:15.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.458 filename1: (groupid=0, jobs=1): err= 0: pid=97670: Wed Jul 24 18:09:20 2024 00:22:15.458 read: IOPS=200, BW=802KiB/s (821kB/s)(8020KiB/10004msec) 00:22:15.458 slat (usec): min=4, max=8026, avg=32.22, stdev=399.77 00:22:15.458 clat (msec): min=26, max=180, avg=79.59, stdev=24.58 00:22:15.458 lat (msec): min=26, max=180, avg=79.62, stdev=24.59 00:22:15.458 clat percentiles (msec): 00:22:15.458 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 61], 00:22:15.458 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 00:22:15.458 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 110], 95.00th=[ 121], 00:22:15.458 | 99.00th=[ 150], 99.50th=[ 165], 99.90th=[ 182], 99.95th=[ 182], 00:22:15.458 | 99.99th=[ 182] 00:22:15.458 bw ( KiB/s): min= 512, max= 1024, per=3.69%, avg=789.95, stdev=157.68, samples=19 00:22:15.458 iops : min= 128, max= 256, avg=197.47, stdev=39.42, samples=19 00:22:15.458 lat (msec) : 50=9.63%, 100=72.22%, 250=18.15% 00:22:15.458 cpu : usr=31.60%, sys=1.60%, ctx=857, majf=0, minf=9 00:22:15.458 IO depths : 1=1.8%, 
2=4.3%, 4=12.7%, 8=69.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:22:15.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 issued rwts: total=2005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.458 filename1: (groupid=0, jobs=1): err= 0: pid=97671: Wed Jul 24 18:09:20 2024 00:22:15.458 read: IOPS=243, BW=972KiB/s (996kB/s)(9732KiB/10008msec) 00:22:15.458 slat (usec): min=4, max=8053, avg=24.33, stdev=304.13 00:22:15.458 clat (msec): min=13, max=167, avg=65.65, stdev=25.17 00:22:15.458 lat (msec): min=13, max=167, avg=65.67, stdev=25.18 00:22:15.458 clat percentiles (msec): 00:22:15.458 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 39], 20.00th=[ 45], 00:22:15.458 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 63], 60.00th=[ 70], 00:22:15.458 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 101], 95.00th=[ 111], 00:22:15.458 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 167], 00:22:15.458 | 99.99th=[ 167] 00:22:15.458 bw ( KiB/s): min= 640, max= 1504, per=4.47%, avg=957.89, stdev=237.07, samples=19 00:22:15.458 iops : min= 160, max= 376, avg=239.42, stdev=59.24, samples=19 00:22:15.458 lat (msec) : 20=0.90%, 50=33.37%, 100=55.12%, 250=10.60% 00:22:15.458 cpu : usr=38.65%, sys=2.01%, ctx=1349, majf=0, minf=9 00:22:15.458 IO depths : 1=0.7%, 2=1.5%, 4=8.1%, 8=76.9%, 16=12.8%, 32=0.0%, >=64=0.0% 00:22:15.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 issued rwts: total=2433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.458 filename1: (groupid=0, jobs=1): err= 0: pid=97672: Wed Jul 24 18:09:20 2024 00:22:15.458 read: IOPS=262, BW=1051KiB/s (1076kB/s)(10.3MiB/10031msec) 00:22:15.458 slat (usec): min=6, max=8040, avg=22.91, stdev=273.44 00:22:15.458 clat (msec): min=5, max=144, avg=60.72, stdev=21.38 00:22:15.458 lat (msec): min=5, max=144, avg=60.74, stdev=21.38 00:22:15.458 clat percentiles (msec): 00:22:15.458 | 1.00th=[ 6], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 46], 00:22:15.458 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 64], 00:22:15.458 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 87], 95.00th=[ 96], 00:22:15.458 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:22:15.458 | 99.99th=[ 144] 00:22:15.458 bw ( KiB/s): min= 688, max= 1816, per=4.89%, avg=1047.25, stdev=242.19, samples=20 00:22:15.458 iops : min= 172, max= 454, avg=261.80, stdev=60.55, samples=20 00:22:15.458 lat (msec) : 10=1.48%, 20=1.21%, 50=36.36%, 100=57.31%, 250=3.64% 00:22:15.458 cpu : usr=39.63%, sys=1.75%, ctx=1053, majf=0, minf=9 00:22:15.458 IO depths : 1=0.6%, 2=1.2%, 4=6.2%, 8=78.7%, 16=13.2%, 32=0.0%, >=64=0.0% 00:22:15.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 complete : 0=0.0%, 4=89.2%, 8=6.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 issued rwts: total=2635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.458 filename1: (groupid=0, jobs=1): err= 0: pid=97673: Wed Jul 24 18:09:20 2024 00:22:15.458 read: IOPS=198, BW=794KiB/s (813kB/s)(7944KiB/10011msec) 00:22:15.458 slat (usec): min=5, max=3426, avg=13.58, stdev=76.77 00:22:15.458 clat (msec): min=22, max=181, avg=80.56, stdev=25.90 
00:22:15.458 lat (msec): min=22, max=181, avg=80.57, stdev=25.90 00:22:15.458 clat percentiles (msec): 00:22:15.458 | 1.00th=[ 24], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 62], 00:22:15.458 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 85], 00:22:15.458 | 70.00th=[ 95], 80.00th=[ 104], 90.00th=[ 113], 95.00th=[ 128], 00:22:15.458 | 99.00th=[ 150], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 182], 00:22:15.458 | 99.99th=[ 182] 00:22:15.458 bw ( KiB/s): min= 512, max= 1277, per=3.68%, avg=787.65, stdev=176.84, samples=20 00:22:15.458 iops : min= 128, max= 319, avg=196.90, stdev=44.17, samples=20 00:22:15.458 lat (msec) : 50=9.77%, 100=66.87%, 250=23.36% 00:22:15.458 cpu : usr=37.73%, sys=1.85%, ctx=1137, majf=0, minf=9 00:22:15.458 IO depths : 1=2.0%, 2=4.5%, 4=14.2%, 8=67.9%, 16=11.3%, 32=0.0%, >=64=0.0% 00:22:15.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.458 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.458 filename1: (groupid=0, jobs=1): err= 0: pid=97674: Wed Jul 24 18:09:20 2024 00:22:15.458 read: IOPS=232, BW=932KiB/s (954kB/s)(9336KiB/10018msec) 00:22:15.458 slat (usec): min=4, max=8027, avg=22.12, stdev=287.25 00:22:15.459 clat (msec): min=11, max=155, avg=68.57, stdev=25.09 00:22:15.459 lat (msec): min=11, max=155, avg=68.60, stdev=25.11 00:22:15.459 clat percentiles (msec): 00:22:15.459 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 00:22:15.459 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:22:15.459 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 101], 95.00th=[ 113], 00:22:15.459 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 157], 99.95th=[ 157], 00:22:15.459 | 99.99th=[ 157] 00:22:15.459 bw ( KiB/s): min= 560, max= 1536, per=4.33%, avg=927.05, stdev=229.81, samples=20 00:22:15.459 iops : min= 140, max= 384, avg=231.75, stdev=57.44, samples=20 00:22:15.459 lat (msec) : 20=2.06%, 50=27.12%, 100=61.78%, 250=9.04% 00:22:15.459 cpu : usr=35.36%, sys=1.87%, ctx=1118, majf=0, minf=9 00:22:15.459 IO depths : 1=1.7%, 2=3.6%, 4=11.4%, 8=71.8%, 16=11.5%, 32=0.0%, >=64=0.0% 00:22:15.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.459 filename2: (groupid=0, jobs=1): err= 0: pid=97675: Wed Jul 24 18:09:20 2024 00:22:15.459 read: IOPS=194, BW=777KiB/s (795kB/s)(7784KiB/10024msec) 00:22:15.459 slat (usec): min=6, max=8020, avg=16.05, stdev=181.60 00:22:15.459 clat (msec): min=26, max=177, avg=82.31, stdev=24.23 00:22:15.459 lat (msec): min=26, max=177, avg=82.33, stdev=24.23 00:22:15.459 clat percentiles (msec): 00:22:15.459 | 1.00th=[ 31], 5.00th=[ 47], 10.00th=[ 57], 20.00th=[ 63], 00:22:15.459 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 85], 00:22:15.459 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 116], 95.00th=[ 130], 00:22:15.459 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 178], 99.95th=[ 178], 00:22:15.459 | 99.99th=[ 178] 00:22:15.459 bw ( KiB/s): min= 512, max= 1024, per=3.60%, avg=771.70, stdev=137.40, samples=20 00:22:15.459 iops : min= 128, max= 256, avg=192.90, stdev=34.35, samples=20 00:22:15.459 lat (msec) : 50=7.86%, 100=72.10%, 250=20.04% 00:22:15.459 
cpu : usr=33.98%, sys=1.54%, ctx=960, majf=0, minf=9 00:22:15.459 IO depths : 1=2.3%, 2=5.4%, 4=17.0%, 8=64.9%, 16=10.4%, 32=0.0%, >=64=0.0% 00:22:15.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 complete : 0=0.0%, 4=91.5%, 8=2.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.459 filename2: (groupid=0, jobs=1): err= 0: pid=97676: Wed Jul 24 18:09:20 2024 00:22:15.459 read: IOPS=195, BW=782KiB/s (801kB/s)(7824KiB/10001msec) 00:22:15.459 slat (usec): min=4, max=8033, avg=26.89, stdev=333.52 00:22:15.459 clat (msec): min=7, max=168, avg=81.59, stdev=25.26 00:22:15.459 lat (msec): min=7, max=168, avg=81.62, stdev=25.26 00:22:15.459 clat percentiles (msec): 00:22:15.459 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 59], 20.00th=[ 61], 00:22:15.459 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 85], 00:22:15.459 | 70.00th=[ 92], 80.00th=[ 104], 90.00th=[ 118], 95.00th=[ 134], 00:22:15.459 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:22:15.459 | 99.99th=[ 169] 00:22:15.459 bw ( KiB/s): min= 512, max= 1152, per=3.63%, avg=776.42, stdev=169.63, samples=19 00:22:15.459 iops : min= 128, max= 288, avg=194.11, stdev=42.41, samples=19 00:22:15.459 lat (msec) : 10=0.10%, 50=8.33%, 100=70.35%, 250=21.22% 00:22:15.459 cpu : usr=36.53%, sys=2.00%, ctx=1080, majf=0, minf=9 00:22:15.459 IO depths : 1=3.1%, 2=6.9%, 4=17.9%, 8=62.7%, 16=9.4%, 32=0.0%, >=64=0.0% 00:22:15.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 complete : 0=0.0%, 4=92.2%, 8=2.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.459 filename2: (groupid=0, jobs=1): err= 0: pid=97677: Wed Jul 24 18:09:20 2024 00:22:15.459 read: IOPS=209, BW=839KiB/s (860kB/s)(8420KiB/10031msec) 00:22:15.459 slat (usec): min=3, max=8033, avg=25.00, stdev=314.62 00:22:15.459 clat (msec): min=28, max=178, avg=76.08, stdev=24.92 00:22:15.459 lat (msec): min=28, max=178, avg=76.10, stdev=24.92 00:22:15.459 clat percentiles (msec): 00:22:15.459 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 56], 00:22:15.459 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 79], 00:22:15.459 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:22:15.459 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 180], 99.95th=[ 180], 00:22:15.459 | 99.99th=[ 180] 00:22:15.459 bw ( KiB/s): min= 512, max= 1152, per=3.90%, avg=834.65, stdev=169.31, samples=20 00:22:15.459 iops : min= 128, max= 288, avg=208.60, stdev=42.32, samples=20 00:22:15.459 lat (msec) : 50=15.11%, 100=69.79%, 250=15.11% 00:22:15.459 cpu : usr=32.51%, sys=1.67%, ctx=936, majf=0, minf=9 00:22:15.459 IO depths : 1=1.1%, 2=2.7%, 4=11.4%, 8=72.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:22:15.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.459 filename2: (groupid=0, jobs=1): err= 0: pid=97678: Wed Jul 24 18:09:20 2024 00:22:15.459 read: IOPS=205, BW=823KiB/s (843kB/s)(8232KiB/10005msec) 00:22:15.459 slat (usec): min=4, max=8025, avg=22.18, stdev=282.60 
00:22:15.459 clat (msec): min=18, max=203, avg=77.63, stdev=26.85 00:22:15.459 lat (msec): min=18, max=203, avg=77.66, stdev=26.85 00:22:15.459 clat percentiles (msec): 00:22:15.459 | 1.00th=[ 32], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:22:15.459 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 80], 00:22:15.459 | 70.00th=[ 89], 80.00th=[ 97], 90.00th=[ 112], 95.00th=[ 129], 00:22:15.459 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 205], 99.95th=[ 205], 00:22:15.459 | 99.99th=[ 205] 00:22:15.459 bw ( KiB/s): min= 512, max= 1142, per=3.76%, avg=805.37, stdev=166.31, samples=19 00:22:15.459 iops : min= 128, max= 285, avg=201.32, stdev=41.52, samples=19 00:22:15.459 lat (msec) : 20=0.29%, 50=14.38%, 100=66.57%, 250=18.76% 00:22:15.459 cpu : usr=31.86%, sys=1.77%, ctx=955, majf=0, minf=9 00:22:15.459 IO depths : 1=1.6%, 2=3.4%, 4=11.3%, 8=72.1%, 16=11.7%, 32=0.0%, >=64=0.0% 00:22:15.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 complete : 0=0.0%, 4=90.3%, 8=4.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 issued rwts: total=2058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.459 filename2: (groupid=0, jobs=1): err= 0: pid=97679: Wed Jul 24 18:09:20 2024 00:22:15.459 read: IOPS=192, BW=771KiB/s (790kB/s)(7720KiB/10009msec) 00:22:15.459 slat (usec): min=4, max=8028, avg=24.52, stdev=315.82 00:22:15.459 clat (msec): min=18, max=176, avg=82.81, stdev=26.15 00:22:15.459 lat (msec): min=18, max=176, avg=82.83, stdev=26.15 00:22:15.459 clat percentiles (msec): 00:22:15.459 | 1.00th=[ 27], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:22:15.459 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 87], 00:22:15.459 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 116], 95.00th=[ 132], 00:22:15.459 | 99.00th=[ 153], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 178], 00:22:15.459 | 99.99th=[ 178] 00:22:15.459 bw ( KiB/s): min= 440, max= 1024, per=3.54%, avg=758.00, stdev=155.17, samples=19 00:22:15.459 iops : min= 110, max= 256, avg=189.47, stdev=38.81, samples=19 00:22:15.459 lat (msec) : 20=0.31%, 50=9.90%, 100=69.53%, 250=20.26% 00:22:15.459 cpu : usr=31.52%, sys=1.63%, ctx=856, majf=0, minf=9 00:22:15.459 IO depths : 1=2.2%, 2=4.9%, 4=14.6%, 8=67.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:22:15.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 complete : 0=0.0%, 4=91.2%, 8=3.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 issued rwts: total=1930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.459 filename2: (groupid=0, jobs=1): err= 0: pid=97680: Wed Jul 24 18:09:20 2024 00:22:15.459 read: IOPS=211, BW=846KiB/s (867kB/s)(8488KiB/10030msec) 00:22:15.459 slat (usec): min=6, max=8026, avg=17.24, stdev=194.30 00:22:15.459 clat (msec): min=34, max=191, avg=75.51, stdev=27.15 00:22:15.459 lat (msec): min=35, max=191, avg=75.53, stdev=27.14 00:22:15.459 clat percentiles (msec): 00:22:15.459 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 51], 00:22:15.459 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 75], 00:22:15.459 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 113], 95.00th=[ 133], 00:22:15.459 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 192], 99.95th=[ 192], 00:22:15.459 | 99.99th=[ 192] 00:22:15.459 bw ( KiB/s): min= 512, max= 1200, per=3.93%, avg=841.60, stdev=203.62, samples=20 00:22:15.459 iops : min= 128, max= 300, avg=210.35, stdev=50.85, samples=20 
00:22:15.459 lat (msec) : 50=19.79%, 100=63.38%, 250=16.82% 00:22:15.459 cpu : usr=34.50%, sys=1.69%, ctx=1131, majf=0, minf=9 00:22:15.459 IO depths : 1=1.4%, 2=3.1%, 4=10.2%, 8=73.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:22:15.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.459 issued rwts: total=2122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.459 filename2: (groupid=0, jobs=1): err= 0: pid=97681: Wed Jul 24 18:09:20 2024 00:22:15.459 read: IOPS=245, BW=980KiB/s (1004kB/s)(9844KiB/10043msec) 00:22:15.459 slat (usec): min=4, max=4044, avg=14.56, stdev=114.79 00:22:15.459 clat (msec): min=6, max=154, avg=65.14, stdev=23.03 00:22:15.459 lat (msec): min=6, max=154, avg=65.15, stdev=23.03 00:22:15.459 clat percentiles (msec): 00:22:15.459 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:22:15.459 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 63], 60.00th=[ 69], 00:22:15.459 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 111], 00:22:15.459 | 99.00th=[ 128], 99.50th=[ 128], 99.90th=[ 155], 99.95th=[ 155], 00:22:15.459 | 99.99th=[ 155] 00:22:15.459 bw ( KiB/s): min= 600, max= 1328, per=4.57%, avg=977.25, stdev=199.97, samples=20 00:22:15.459 iops : min= 150, max= 332, avg=244.30, stdev=49.99, samples=20 00:22:15.459 lat (msec) : 10=0.24%, 20=0.33%, 50=32.26%, 100=58.59%, 250=8.57% 00:22:15.460 cpu : usr=38.63%, sys=1.93%, ctx=1107, majf=0, minf=9 00:22:15.460 IO depths : 1=0.5%, 2=1.0%, 4=7.5%, 8=77.9%, 16=13.2%, 32=0.0%, >=64=0.0% 00:22:15.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.460 complete : 0=0.0%, 4=89.0%, 8=6.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.460 issued rwts: total=2461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.460 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.460 filename2: (groupid=0, jobs=1): err= 0: pid=97682: Wed Jul 24 18:09:20 2024 00:22:15.460 read: IOPS=218, BW=872KiB/s (893kB/s)(8756KiB/10040msec) 00:22:15.460 slat (usec): min=6, max=8021, avg=17.64, stdev=191.62 00:22:15.460 clat (msec): min=21, max=157, avg=73.18, stdev=23.94 00:22:15.460 lat (msec): min=21, max=157, avg=73.20, stdev=23.94 00:22:15.460 clat percentiles (msec): 00:22:15.460 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 55], 00:22:15.460 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:22:15.460 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 121], 00:22:15.460 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 159], 99.95th=[ 159], 00:22:15.460 | 99.99th=[ 159] 00:22:15.460 bw ( KiB/s): min= 640, max= 1221, per=4.07%, avg=870.70, stdev=166.45, samples=20 00:22:15.460 iops : min= 160, max= 305, avg=217.60, stdev=41.50, samples=20 00:22:15.460 lat (msec) : 50=16.49%, 100=68.62%, 250=14.89% 00:22:15.460 cpu : usr=33.74%, sys=1.70%, ctx=960, majf=0, minf=9 00:22:15.460 IO depths : 1=1.8%, 2=3.9%, 4=12.1%, 8=70.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:22:15.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.460 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.460 issued rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.460 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:15.460 00:22:15.460 Run status group 0 (all jobs): 00:22:15.460 READ: bw=20.9MiB/s (21.9MB/s), 771KiB/s-1054KiB/s (790kB/s-1079kB/s), 
io=210MiB (220MB), run=10001-10043msec 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 bdev_null0 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 [2024-07-24 18:09:20.935863] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 bdev_null1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.460 18:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.460 { 00:22:15.460 "params": { 00:22:15.460 "name": "Nvme$subsystem", 00:22:15.460 "trtype": "$TEST_TRANSPORT", 00:22:15.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.460 "adrfam": "ipv4", 00:22:15.460 "trsvcid": "$NVMF_PORT", 00:22:15.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.460 "hdgst": ${hdgst:-false}, 00:22:15.460 "ddgst": ${ddgst:-false} 00:22:15.460 }, 00:22:15.461 "method": "bdev_nvme_attach_controller" 00:22:15.461 } 00:22:15.461 EOF 00:22:15.461 )") 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@56 -- # cat 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.461 { 00:22:15.461 "params": { 00:22:15.461 "name": "Nvme$subsystem", 00:22:15.461 "trtype": "$TEST_TRANSPORT", 00:22:15.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.461 "adrfam": "ipv4", 00:22:15.461 "trsvcid": "$NVMF_PORT", 00:22:15.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.461 "hdgst": ${hdgst:-false}, 00:22:15.461 "ddgst": ${ddgst:-false} 00:22:15.461 }, 00:22:15.461 "method": "bdev_nvme_attach_controller" 00:22:15.461 } 00:22:15.461 EOF 00:22:15.461 )") 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
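Note: the trace above builds one bdev_nvme_attach_controller parameter block per subsystem with gen_nvmf_target_json, probes for libasan with ldd only so it can be prepended to LD_PRELOAD, and just below hands the merged JSON to fio on fd 62 (the generated job file rides on fd 61) through the SPDK bdev ioengine. A minimal standalone sketch of the same invocation, assuming the fio plugin is built and substituting ordinary files for the /dev/fd descriptors (bdev.json and dif.fio are placeholder names, not files the harness creates):
# bdev.json = output of gen_nvmf_target_json (printed verbatim below)
# dif.fio   = job file matching this run: randread, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio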
00:22:15.461 18:09:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:15.461 18:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:15.461 "params": { 00:22:15.461 "name": "Nvme0", 00:22:15.461 "trtype": "tcp", 00:22:15.461 "traddr": "10.0.0.2", 00:22:15.461 "adrfam": "ipv4", 00:22:15.461 "trsvcid": "4420", 00:22:15.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:15.461 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:15.461 "hdgst": false, 00:22:15.461 "ddgst": false 00:22:15.461 }, 00:22:15.461 "method": "bdev_nvme_attach_controller" 00:22:15.461 },{ 00:22:15.461 "params": { 00:22:15.461 "name": "Nvme1", 00:22:15.461 "trtype": "tcp", 00:22:15.461 "traddr": "10.0.0.2", 00:22:15.461 "adrfam": "ipv4", 00:22:15.461 "trsvcid": "4420", 00:22:15.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.461 "hdgst": false, 00:22:15.461 "ddgst": false 00:22:15.461 }, 00:22:15.461 "method": "bdev_nvme_attach_controller" 00:22:15.461 }' 00:22:15.461 18:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:15.461 18:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:15.461 18:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:15.461 18:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:15.461 18:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:15.461 18:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:15.461 18:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:15.461 18:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:15.461 18:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:15.461 18:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:15.461 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:15.461 ... 00:22:15.461 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:15.461 ... 
00:22:15.461 fio-3.35 00:22:15.461 Starting 4 threads 00:22:20.775 00:22:20.775 filename0: (groupid=0, jobs=1): err= 0: pid=97814: Wed Jul 24 18:09:26 2024 00:22:20.775 read: IOPS=2033, BW=15.9MiB/s (16.7MB/s)(79.4MiB/5001msec) 00:22:20.775 slat (nsec): min=3537, max=48544, avg=14250.97, stdev=4452.03 00:22:20.775 clat (usec): min=1056, max=6241, avg=3870.40, stdev=316.89 00:22:20.775 lat (usec): min=1081, max=6255, avg=3884.65, stdev=316.74 00:22:20.775 clat percentiles (usec): 00:22:20.775 | 1.00th=[ 2966], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3720], 00:22:20.775 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3851], 00:22:20.775 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4228], 95.00th=[ 4555], 00:22:20.775 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5604], 99.95th=[ 6128], 00:22:20.775 | 99.99th=[ 6259] 00:22:20.775 bw ( KiB/s): min=15360, max=16768, per=25.05%, avg=16298.67, stdev=448.00, samples=9 00:22:20.775 iops : min= 1920, max= 2096, avg=2037.33, stdev=56.00, samples=9 00:22:20.775 lat (msec) : 2=0.10%, 4=83.86%, 10=16.04% 00:22:20.775 cpu : usr=91.68%, sys=7.26%, ctx=7, majf=0, minf=9 00:22:20.775 IO depths : 1=8.5%, 2=25.0%, 4=50.0%, 8=16.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:20.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.775 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.775 issued rwts: total=10168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.775 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:20.775 filename0: (groupid=0, jobs=1): err= 0: pid=97815: Wed Jul 24 18:09:26 2024 00:22:20.775 read: IOPS=2036, BW=15.9MiB/s (16.7MB/s)(79.6MiB/5002msec) 00:22:20.775 slat (usec): min=6, max=162, avg=10.09, stdev= 4.48 00:22:20.775 clat (usec): min=1269, max=6864, avg=3882.41, stdev=284.92 00:22:20.775 lat (usec): min=1277, max=6880, avg=3892.50, stdev=284.92 00:22:20.775 clat percentiles (usec): 00:22:20.775 | 1.00th=[ 3392], 5.00th=[ 3621], 10.00th=[ 3687], 20.00th=[ 3720], 00:22:20.775 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3851], 00:22:20.775 | 70.00th=[ 3916], 80.00th=[ 3949], 90.00th=[ 4228], 95.00th=[ 4490], 00:22:20.775 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 6063], 99.95th=[ 6063], 00:22:20.775 | 99.99th=[ 6652] 00:22:20.775 bw ( KiB/s): min=15390, max=16768, per=25.10%, avg=16332.22, stdev=451.28, samples=9 00:22:20.775 iops : min= 1923, max= 2096, avg=2041.44, stdev=56.61, samples=9 00:22:20.775 lat (msec) : 2=0.15%, 4=82.98%, 10=16.87% 00:22:20.775 cpu : usr=91.14%, sys=7.26%, ctx=80, majf=0, minf=0 00:22:20.775 IO depths : 1=6.9%, 2=20.1%, 4=54.8%, 8=18.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:20.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.775 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.775 issued rwts: total=10185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.775 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:20.775 filename1: (groupid=0, jobs=1): err= 0: pid=97816: Wed Jul 24 18:09:26 2024 00:22:20.775 read: IOPS=2031, BW=15.9MiB/s (16.6MB/s)(79.4MiB/5001msec) 00:22:20.775 slat (nsec): min=4813, max=58277, avg=14299.45, stdev=4686.93 00:22:20.775 clat (usec): min=1307, max=7383, avg=3869.34, stdev=407.45 00:22:20.775 lat (usec): min=1327, max=7393, avg=3883.64, stdev=407.27 00:22:20.775 clat percentiles (usec): 00:22:20.775 | 1.00th=[ 2704], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3687], 00:22:20.775 | 30.00th=[ 3752], 40.00th=[ 
3785], 50.00th=[ 3818], 60.00th=[ 3851], 00:22:20.775 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4228], 95.00th=[ 4555], 00:22:20.775 | 99.00th=[ 5604], 99.50th=[ 5997], 99.90th=[ 6652], 99.95th=[ 6783], 00:22:20.775 | 99.99th=[ 7242] 00:22:20.775 bw ( KiB/s): min=15360, max=16768, per=25.05%, avg=16296.89, stdev=449.36, samples=9 00:22:20.775 iops : min= 1920, max= 2096, avg=2037.11, stdev=56.17, samples=9 00:22:20.775 lat (msec) : 2=0.11%, 4=84.29%, 10=15.60% 00:22:20.775 cpu : usr=92.46%, sys=6.36%, ctx=12, majf=0, minf=9 00:22:20.775 IO depths : 1=7.5%, 2=25.0%, 4=50.0%, 8=17.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:20.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.775 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.775 issued rwts: total=10160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.775 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:20.775 filename1: (groupid=0, jobs=1): err= 0: pid=97817: Wed Jul 24 18:09:26 2024 00:22:20.775 read: IOPS=2034, BW=15.9MiB/s (16.7MB/s)(79.6MiB/5004msec) 00:22:20.775 slat (nsec): min=4228, max=48517, avg=10855.75, stdev=4222.47 00:22:20.775 clat (usec): min=1825, max=6056, avg=3890.37, stdev=348.58 00:22:20.775 lat (usec): min=1832, max=6069, avg=3901.23, stdev=348.48 00:22:20.775 clat percentiles (usec): 00:22:20.775 | 1.00th=[ 2933], 5.00th=[ 3359], 10.00th=[ 3654], 20.00th=[ 3720], 00:22:20.775 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:22:20.775 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4424], 95.00th=[ 4555], 00:22:20.775 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5342], 99.95th=[ 5866], 00:22:20.776 | 99.99th=[ 5866] 00:22:20.776 bw ( KiB/s): min=15408, max=16768, per=25.08%, avg=16320.00, stdev=455.58, samples=9 00:22:20.776 iops : min= 1926, max= 2096, avg=2040.00, stdev=56.95, samples=9 00:22:20.776 lat (msec) : 2=0.18%, 4=80.34%, 10=19.48% 00:22:20.776 cpu : usr=91.92%, sys=7.08%, ctx=5, majf=0, minf=9 00:22:20.776 IO depths : 1=3.1%, 2=9.9%, 4=65.1%, 8=21.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:20.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.776 complete : 0=0.0%, 4=89.8%, 8=10.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.776 issued rwts: total=10183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.776 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:20.776 00:22:20.776 Run status group 0 (all jobs): 00:22:20.776 READ: bw=63.5MiB/s (66.6MB/s), 15.9MiB/s-15.9MiB/s (16.6MB/s-16.7MB/s), io=318MiB (333MB), run=5001-5004msec 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.776 00:22:20.776 real 0m23.681s 00:22:20.776 user 2m3.878s 00:22:20.776 sys 0m7.730s 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.776 18:09:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.776 ************************************ 00:22:20.776 END TEST fio_dif_rand_params 00:22:20.776 ************************************ 00:22:20.776 18:09:27 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:20.776 18:09:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:20.776 18:09:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.776 18:09:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:20.776 ************************************ 00:22:20.776 START TEST fio_dif_digest 00:22:20.776 ************************************ 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
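Note: the create_subsystems 0 call traced above expands just below into rpc_cmd invocations that rebuild subsystem 0 for the digest test with a DIF type 3 null bdev (64 MB, 512-byte blocks, 16 bytes of metadata). As a sketch, they correspond to these standalone scripts/rpc.py command lines (illustrative; they assume an already-running nvmf target with the tcp transport created and the default RPC socket):
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420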
00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:20.776 bdev_null0 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:20.776 [2024-07-24 18:09:27.140922] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.776 { 00:22:20.776 "params": { 00:22:20.776 "name": "Nvme$subsystem", 00:22:20.776 "trtype": "$TEST_TRANSPORT", 00:22:20.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.776 "adrfam": "ipv4", 00:22:20.776 "trsvcid": "$NVMF_PORT", 00:22:20.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.776 "hdgst": ${hdgst:-false}, 00:22:20.776 "ddgst": ${ddgst:-false} 00:22:20.776 }, 00:22:20.776 "method": "bdev_nvme_attach_controller" 00:22:20.776 } 00:22:20.776 EOF 00:22:20.776 )") 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:20.776 18:09:27 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
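Note: the digest run below reads 128 KiB blocks from 3 jobs at iodepth 3 for 10 seconds, with NVMe/TCP header and data digests enabled via the "hdgst"/"ddgst" attach parameters printed next. A rough sketch of the job file the harness generates for it (illustrative only; the real file is written to fd 61, thread=1 is required by the spdk_bdev ioengine, and filename=Nvme0n1 assumes the default namespace bdev name for controller Nvme0):
cat > dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF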
00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:22:20.776 18:09:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:20.776 "params": { 00:22:20.776 "name": "Nvme0", 00:22:20.776 "trtype": "tcp", 00:22:20.776 "traddr": "10.0.0.2", 00:22:20.776 "adrfam": "ipv4", 00:22:20.776 "trsvcid": "4420", 00:22:20.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:20.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:20.776 "hdgst": true, 00:22:20.776 "ddgst": true 00:22:20.776 }, 00:22:20.776 "method": "bdev_nvme_attach_controller" 00:22:20.776 }' 00:22:20.777 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:20.777 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:20.777 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.777 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:20.777 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:20.777 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:20.777 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:20.777 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:20.777 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:20.777 18:09:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:20.777 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:20.777 ... 
00:22:20.777 fio-3.35 00:22:20.777 Starting 3 threads 00:22:33.101 00:22:33.101 filename0: (groupid=0, jobs=1): err= 0: pid=97923: Wed Jul 24 18:09:37 2024 00:22:33.101 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(325MiB/10005msec) 00:22:33.101 slat (nsec): min=4301, max=71437, avg=15051.05, stdev=4782.65 00:22:33.101 clat (usec): min=8683, max=54474, avg=11538.74, stdev=3224.69 00:22:33.101 lat (usec): min=8697, max=54497, avg=11553.79, stdev=3224.95 00:22:33.101 clat percentiles (usec): 00:22:33.101 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:22:33.101 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:22:33.101 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:22:33.102 | 99.00th=[13698], 99.50th=[51643], 99.90th=[53740], 99.95th=[54264], 00:22:33.102 | 99.99th=[54264] 00:22:33.102 bw ( KiB/s): min=27392, max=35072, per=38.31%, avg=33216.00, stdev=1879.14, samples=20 00:22:33.102 iops : min= 214, max= 274, avg=259.50, stdev=14.68, samples=20 00:22:33.102 lat (msec) : 10=4.47%, 20=94.96%, 100=0.58% 00:22:33.102 cpu : usr=90.35%, sys=8.29%, ctx=12, majf=0, minf=0 00:22:33.102 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:33.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.102 issued rwts: total=2597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.102 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:33.102 filename0: (groupid=0, jobs=1): err= 0: pid=97924: Wed Jul 24 18:09:37 2024 00:22:33.102 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(289MiB/10045msec) 00:22:33.102 slat (nsec): min=6920, max=48724, avg=14773.51, stdev=4676.24 00:22:33.102 clat (usec): min=7283, max=47397, avg=12999.96, stdev=1674.61 00:22:33.102 lat (usec): min=7310, max=47416, avg=13014.73, stdev=1674.85 00:22:33.102 clat percentiles (usec): 00:22:33.102 | 1.00th=[ 8356], 5.00th=[10945], 10.00th=[11600], 20.00th=[11994], 00:22:33.102 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:22:33.102 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14615], 95.00th=[15008], 00:22:33.102 | 99.00th=[15926], 99.50th=[16581], 99.90th=[17433], 99.95th=[46924], 00:22:33.102 | 99.99th=[47449] 00:22:33.102 bw ( KiB/s): min=28160, max=30976, per=34.09%, avg=29555.20, stdev=754.29, samples=20 00:22:33.102 iops : min= 220, max= 242, avg=230.90, stdev= 5.89, samples=20 00:22:33.102 lat (msec) : 10=2.86%, 20=97.06%, 50=0.09% 00:22:33.102 cpu : usr=90.61%, sys=8.05%, ctx=37, majf=0, minf=0 00:22:33.102 IO depths : 1=2.5%, 2=97.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:33.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.102 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.102 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:33.102 filename0: (groupid=0, jobs=1): err= 0: pid=97925: Wed Jul 24 18:09:37 2024 00:22:33.102 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(237MiB/10003msec) 00:22:33.102 slat (nsec): min=6898, max=54411, avg=14439.37, stdev=4797.56 00:22:33.102 clat (usec): min=4603, max=19201, avg=15806.45, stdev=1412.10 00:22:33.102 lat (usec): min=4619, max=19214, avg=15820.89, stdev=1412.69 00:22:33.102 clat percentiles (usec): 00:22:33.102 | 1.00th=[ 9372], 5.00th=[14091], 10.00th=[14746], 20.00th=[15270], 00:22:33.102 | 
30.00th=[15533], 40.00th=[15795], 50.00th=[16057], 60.00th=[16188], 00:22:33.102 | 70.00th=[16450], 80.00th=[16712], 90.00th=[17171], 95.00th=[17433], 00:22:33.102 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19006], 99.95th=[19268], 00:22:33.102 | 99.99th=[19268] 00:22:33.102 bw ( KiB/s): min=23040, max=26112, per=27.99%, avg=24266.11, stdev=780.13, samples=19 00:22:33.102 iops : min= 180, max= 204, avg=189.58, stdev= 6.09, samples=19 00:22:33.102 lat (msec) : 10=2.32%, 20=97.68% 00:22:33.102 cpu : usr=90.66%, sys=8.24%, ctx=118, majf=0, minf=0 00:22:33.102 IO depths : 1=5.5%, 2=94.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:33.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.102 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.102 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:33.102 00:22:33.102 Run status group 0 (all jobs): 00:22:33.102 READ: bw=84.7MiB/s (88.8MB/s), 23.7MiB/s-32.4MiB/s (24.8MB/s-34.0MB/s), io=851MiB (892MB), run=10003-10045msec 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.102 00:22:33.102 real 0m10.985s 00:22:33.102 user 0m27.840s 00:22:33.102 sys 0m2.741s 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:33.102 18:09:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:33.102 ************************************ 00:22:33.102 END TEST fio_dif_digest 00:22:33.102 ************************************ 00:22:33.102 18:09:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:33.102 18:09:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.102 rmmod nvme_tcp 00:22:33.102 rmmod nvme_fabrics 00:22:33.102 rmmod nvme_keyring 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97156 ']' 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97156 00:22:33.102 18:09:38 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 97156 ']' 00:22:33.102 18:09:38 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 97156 00:22:33.102 18:09:38 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:22:33.102 18:09:38 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.102 18:09:38 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97156 00:22:33.102 18:09:38 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:33.102 killing process with pid 97156 00:22:33.102 18:09:38 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:33.102 18:09:38 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97156' 00:22:33.102 18:09:38 nvmf_dif -- common/autotest_common.sh@969 -- # kill 97156 00:22:33.102 18:09:38 nvmf_dif -- common/autotest_common.sh@974 -- # wait 97156 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:22:33.102 18:09:38 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:33.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:33.102 Waiting for block devices as requested 00:22:33.102 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:33.102 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:33.102 18:09:39 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:33.102 18:09:39 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:33.102 18:09:39 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.102 18:09:39 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.102 18:09:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.102 18:09:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:33.102 18:09:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.102 18:09:39 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:33.102 00:22:33.102 real 1m0.378s 00:22:33.102 user 3m47.275s 00:22:33.102 sys 0m20.123s 00:22:33.102 18:09:39 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:33.102 ************************************ 00:22:33.102 END TEST nvmf_dif 00:22:33.102 ************************************ 00:22:33.102 18:09:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:33.102 18:09:39 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:33.102 18:09:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:33.102 18:09:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:33.102 18:09:39 -- common/autotest_common.sh@10 -- # set +x 00:22:33.102 ************************************ 00:22:33.102 START TEST nvmf_abort_qd_sizes 00:22:33.102 ************************************ 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:33.102 * Looking for test storage... 
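
Just above, nvmftestfini tears the nvmf_dif run down before nvmf_abort_qd_sizes starts: the SPDK target process (pid 97156) is killed, the kernel NVMe/TCP modules are removed, setup.sh reset rebinds the NVMe devices, and the target namespace and initiator address are cleaned up. Condensed into a sketch (the netns deletion line is an assumption about what _remove_spdk_ns does; the pid and paths are the ones this run used):

# Hedged sketch of the nvmf teardown traced above.
kill 97156                                            # killprocess: stop the nvmf_tgt reactor
modprobe -v -r nvme-tcp                               # also drops the now-unused nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset   # hand NVMe devices back to the kernel driver
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true  # assumed equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if                         # drop the initiator-side veth address
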
00:22:33.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.102 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:33.103 18:09:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:33.103 Cannot find device "nvmf_tgt_br" 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:33.103 Cannot find device "nvmf_tgt_br2" 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:33.103 Cannot find device "nvmf_tgt_br" 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:33.103 Cannot find device "nvmf_tgt_br2" 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:33.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:33.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:33.103 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:33.103 18:09:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:33.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:22:33.104 00:22:33.104 --- 10.0.0.2 ping statistics --- 00:22:33.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.104 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:33.104 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:33.104 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:22:33.104 00:22:33.104 --- 10.0.0.3 ping statistics --- 00:22:33.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.104 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:33.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:33.104 00:22:33.104 --- 10.0.0.1 ping statistics --- 00:22:33.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.104 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:33.104 18:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:33.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:33.928 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:33.928 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=98523 00:22:33.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 98523 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 98523 ']' 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.928 18:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:34.186 [2024-07-24 18:09:40.904930] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
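
nvmf_veth_init, traced just above, builds the NET_TYPE=virt topology the rest of the test talks over: a network namespace for the target, veth pairs for initiator and target, a bridge tying the host-side ends together, an iptables accept rule for port 4420, and ping checks in both directions before the target app is launched. A condensed sketch of the same commands, in the order the trace shows them (interface names and addresses are exactly those from the log):

# Sketch of the veth/netns topology from nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                          # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # target namespace -> host

With that plumbing in place, nvmfappstart runs nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xf), which is the DPDK/SPDK startup whose notices follow.
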
00:22:34.186 [2024-07-24 18:09:40.905234] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.186 [2024-07-24 18:09:41.043187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.446 [2024-07-24 18:09:41.164296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.446 [2024-07-24 18:09:41.164582] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.446 [2024-07-24 18:09:41.164838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.446 [2024-07-24 18:09:41.165039] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.446 [2024-07-24 18:09:41.165189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.446 [2024-07-24 18:09:41.165559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.446 [2024-07-24 18:09:41.165637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.446 [2024-07-24 18:09:41.165702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.446 [2024-07-24 18:09:41.165713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:35.012 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:22:35.270 18:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:22:35.270 18:09:42 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:35.270 18:09:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:35.270 ************************************ 00:22:35.270 START TEST spdk_target_abort 00:22:35.270 ************************************ 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.270 spdk_targetn1 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.270 [2024-07-24 18:09:42.103634] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.270 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.271 [2024-07-24 18:09:42.131887] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.271 18:09:42 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:35.271 18:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:38.580 Initializing NVMe Controllers 00:22:38.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:38.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:38.580 Initialization complete. Launching workers. 
00:22:38.580 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11710, failed: 0 00:22:38.580 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1101, failed to submit 10609 00:22:38.580 success 739, unsuccess 362, failed 0 00:22:38.580 18:09:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:38.580 18:09:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:41.872 Initializing NVMe Controllers 00:22:41.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:41.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:41.872 Initialization complete. Launching workers. 00:22:41.872 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5883, failed: 0 00:22:41.872 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1269, failed to submit 4614 00:22:41.872 success 259, unsuccess 1010, failed 0 00:22:41.872 18:09:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:41.872 18:09:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:45.153 Initializing NVMe Controllers 00:22:45.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:45.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:45.153 Initialization complete. Launching workers. 
00:22:45.153 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31069, failed: 0 00:22:45.153 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2674, failed to submit 28395 00:22:45.153 success 467, unsuccess 2207, failed 0 00:22:45.153 18:09:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:45.153 18:09:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.153 18:09:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:45.153 18:09:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.153 18:09:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:45.153 18:09:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.153 18:09:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98523 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 98523 ']' 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 98523 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98523 00:22:46.088 killing process with pid 98523 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98523' 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 98523 00:22:46.088 18:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 98523 00:22:46.347 00:22:46.347 real 0m11.171s 00:22:46.347 user 0m44.493s 00:22:46.347 sys 0m2.225s 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:46.347 ************************************ 00:22:46.347 END TEST spdk_target_abort 00:22:46.347 ************************************ 00:22:46.347 18:09:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:46.347 18:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:46.347 18:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:46.347 18:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:46.347 ************************************ 00:22:46.347 START TEST kernel_target_abort 00:22:46.347 
************************************ 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:46.347 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:46.911 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:46.911 Waiting for block devices as requested 00:22:46.911 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:46.911 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:47.169 No valid GPT data, bailing 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:47.169 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:22:47.170 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:22:47.170 18:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:47.170 No valid GPT data, bailing 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
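
Each /sys/block/nvme* entry is screened the same way before one is handed to the kernel target: zoned namespaces are skipped, and a namespace counts as in use if spdk-gpt.py or blkid finds a partition table on it, so "No valid GPT data, bailing" is the desirable outcome here. A hedged sketch of that per-device screen (is_block_zoned and block_in_use are reimplemented inline, not copied from the helpers; the last usable namespace wins, /dev/nvme1n1 in this run):

# Sketch of the free-namespace scan used by configure_kernel_target above.
pick=""
for blk in /sys/block/nvme*; do
  dev=${blk##*/}
  [[ -e "$blk/queue/zoned" && $(cat "$blk/queue/zoned") != none ]] && continue   # skip zoned devices
  /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev" >/dev/null 2>&1        # "No valid GPT data" => free
  [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue                   # partition table => in use
  pick="/dev/$dev"
done
echo "backing device: $pick"
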
00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:47.170 No valid GPT data, bailing 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:47.170 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:47.428 No valid GPT data, bailing 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee --hostid=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee -a 10.0.0.1 -t tcp -s 4420 00:22:47.428 00:22:47.428 Discovery Log Number of Records 2, Generation counter 2 00:22:47.428 =====Discovery Log Entry 0====== 00:22:47.428 trtype: tcp 00:22:47.428 adrfam: ipv4 00:22:47.428 subtype: current discovery subsystem 00:22:47.428 treq: not specified, sq flow control disable supported 00:22:47.428 portid: 1 00:22:47.428 trsvcid: 4420 00:22:47.428 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:47.428 traddr: 10.0.0.1 00:22:47.428 eflags: none 00:22:47.428 sectype: none 00:22:47.428 =====Discovery Log Entry 1====== 00:22:47.428 trtype: tcp 00:22:47.428 adrfam: ipv4 00:22:47.428 subtype: nvme subsystem 00:22:47.428 treq: not specified, sq flow control disable supported 00:22:47.428 portid: 1 00:22:47.428 trsvcid: 4420 00:22:47.428 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:47.428 traddr: 10.0.0.1 00:22:47.428 eflags: none 00:22:47.428 sectype: none 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:47.428 18:09:54 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:47.428 18:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:50.706 Initializing NVMe Controllers 00:22:50.706 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:50.706 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:50.706 Initialization complete. Launching workers. 00:22:50.706 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36306, failed: 0 00:22:50.706 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36306, failed to submit 0 00:22:50.706 success 0, unsuccess 36306, failed 0 00:22:50.706 18:09:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:50.706 18:09:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:54.017 Initializing NVMe Controllers 00:22:54.017 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:54.017 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:54.017 Initialization complete. Launching workers. 
00:22:54.017 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74377, failed: 0 00:22:54.017 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32864, failed to submit 41513 00:22:54.017 success 0, unsuccess 32864, failed 0 00:22:54.017 18:10:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:54.017 18:10:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:57.310 Initializing NVMe Controllers 00:22:57.310 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:57.310 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:57.310 Initialization complete. Launching workers. 00:22:57.310 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89313, failed: 0 00:22:57.310 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22296, failed to submit 67017 00:22:57.310 success 0, unsuccess 22296, failed 0 00:22:57.310 18:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:57.310 18:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:57.310 18:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:22:57.310 18:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:57.310 18:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:57.310 18:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:57.310 18:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:57.310 18:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:57.310 18:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:57.310 18:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:57.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:00.410 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:00.410 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:00.410 00:23:00.410 real 0m13.733s 00:23:00.410 user 0m6.480s 00:23:00.410 sys 0m4.712s 00:23:00.410 18:10:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:00.410 ************************************ 00:23:00.410 END TEST kernel_target_abort 00:23:00.410 18:10:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:00.410 ************************************ 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:00.410 
18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:00.410 rmmod nvme_tcp 00:23:00.410 rmmod nvme_fabrics 00:23:00.410 rmmod nvme_keyring 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:00.410 Process with pid 98523 is not found 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 98523 ']' 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 98523 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 98523 ']' 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 98523 00:23:00.410 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (98523) - No such process 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 98523 is not found' 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:00.410 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:00.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:00.668 Waiting for block devices as requested 00:23:00.668 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:00.926 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:00.926 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:00.926 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:00.926 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.926 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.926 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.926 18:10:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:00.926 18:10:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.926 18:10:07 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:00.926 00:23:00.926 real 0m28.565s 00:23:00.926 user 0m52.305s 00:23:00.926 sys 0m8.620s 00:23:00.926 18:10:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:00.926 18:10:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:00.926 ************************************ 00:23:00.926 END TEST nvmf_abort_qd_sizes 00:23:00.926 ************************************ 00:23:00.926 18:10:07 -- spdk/autotest.sh@299 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:00.926 18:10:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:00.926 18:10:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:00.926 18:10:07 -- common/autotest_common.sh@10 -- # set +x 
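Note: the kernel_target_abort run above builds a Linux kernel NVMe/TCP target through configfs (nvmf/common.sh) before pointing SPDK's abort example at it. Below is a minimal standalone sketch of that setup and teardown. The xtrace hides redirection targets, so the attribute file names are assumptions based on the standard kernel nvmet configfs layout; the "echo SPDK-nqn..." write seen in the trace (whose destination is not visible) is omitted here.

  #!/usr/bin/env bash
  # Sketch of the kernel nvmet-over-TCP target setup exercised by nvmf/common.sh above.
  # Attribute paths (attr_allow_any_host, device_path, addr_*) are assumed from the
  # standard kernel nvmet configfs layout; they are not shown in the xtrace itself.
  set -euo pipefail

  nqn=nqn.2016-06.io.spdk:testnqn
  dev=/dev/nvme1n1                   # block device exported as namespace 1 (from the log)
  cfg=/sys/kernel/config/nvmet

  modprobe nvmet nvmet_tcp

  mkdir "$cfg/subsystems/$nqn"
  mkdir "$cfg/subsystems/$nqn/namespaces/1"
  mkdir "$cfg/ports/1"

  echo 1        > "$cfg/subsystems/$nqn/attr_allow_any_host"
  echo "$dev"   > "$cfg/subsystems/$nqn/namespaces/1/device_path"
  echo 1        > "$cfg/subsystems/$nqn/namespaces/1/enable"

  echo 10.0.0.1 > "$cfg/ports/1/addr_traddr"
  echo tcp      > "$cfg/ports/1/addr_trtype"
  echo 4420     > "$cfg/ports/1/addr_trsvcid"
  echo ipv4     > "$cfg/ports/1/addr_adrfam"

  # Expose the subsystem on the port, then verify with discovery (as the log does).
  ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420

  # Teardown mirrors clean_kernel_target: disable the namespace, unlink, remove
  # the configfs directories, and unload the modules.
  echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"
  rm -f "$cfg/ports/1/subsystems/$nqn"
  rmdir "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1" "$cfg/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet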
00:23:00.926 ************************************ 00:23:00.926 START TEST keyring_file 00:23:00.926 ************************************ 00:23:00.926 18:10:07 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:01.186 * Looking for test storage... 00:23:01.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:01.187 18:10:07 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:01.187 18:10:07 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:01.187 18:10:07 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.187 18:10:07 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.187 18:10:07 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.187 18:10:07 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.187 18:10:07 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.187 18:10:07 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.187 18:10:07 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:01.187 18:10:07 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@47 -- # : 0 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:01.187 18:10:07 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:01.187 18:10:07 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:01.187 18:10:07 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:01.187 18:10:07 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:01.187 18:10:07 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:01.187 18:10:07 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:01.187 18:10:07 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:01.187 18:10:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:01.187 18:10:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:01.187 18:10:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:01.187 18:10:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:01.187 18:10:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:01.187 18:10:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.EEMAQ1Cbxu 00:23:01.187 18:10:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:01.187 18:10:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:01.187 18:10:08 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EEMAQ1Cbxu 00:23:01.187 18:10:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.EEMAQ1Cbxu 00:23:01.187 18:10:08 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.EEMAQ1Cbxu 00:23:01.187 18:10:08 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:01.187 18:10:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:01.187 18:10:08 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:01.187 18:10:08 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:01.187 18:10:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:01.187 18:10:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:01.187 18:10:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VtwePWQPQF 00:23:01.187 18:10:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:01.187 18:10:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:01.187 18:10:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.187 18:10:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:01.187 18:10:08 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:01.187 18:10:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:01.187 18:10:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:01.187 18:10:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VtwePWQPQF 00:23:01.187 18:10:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VtwePWQPQF 00:23:01.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.187 18:10:08 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.VtwePWQPQF 00:23:01.187 18:10:08 keyring_file -- keyring/file.sh@30 -- # tgtpid=99412 00:23:01.187 18:10:08 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99412 00:23:01.187 18:10:08 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99412 ']' 00:23:01.187 18:10:08 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.187 18:10:08 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:01.187 18:10:08 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.187 18:10:08 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:01.187 18:10:08 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:01.187 18:10:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:01.475 [2024-07-24 18:10:08.173376] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:23:01.475 [2024-07-24 18:10:08.173488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99412 ] 00:23:01.475 [2024-07-24 18:10:08.315602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.475 [2024-07-24 18:10:08.444921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:23:02.409 18:10:09 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:02.409 [2024-07-24 18:10:09.072918] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.409 null0 00:23:02.409 [2024-07-24 18:10:09.104899] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:02.409 [2024-07-24 18:10:09.105147] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:02.409 [2024-07-24 18:10:09.112887] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.409 18:10:09 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:02.409 [2024-07-24 18:10:09.124891] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:02.409 2024/07/24 18:10:09 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:23:02.409 request: 00:23:02.409 { 00:23:02.409 "method": "nvmf_subsystem_add_listener", 00:23:02.409 "params": { 00:23:02.409 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:02.409 "secure_channel": false, 00:23:02.409 "listen_address": { 00:23:02.409 "trtype": "tcp", 00:23:02.409 "traddr": "127.0.0.1", 00:23:02.409 "trsvcid": "4420" 00:23:02.409 } 00:23:02.409 } 00:23:02.409 } 00:23:02.409 Got JSON-RPC error 
response 00:23:02.409 GoRPCClient: error on JSON-RPC call 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:02.409 18:10:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:02.410 18:10:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:02.410 18:10:09 keyring_file -- keyring/file.sh@46 -- # bperfpid=99443 00:23:02.410 18:10:09 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99443 /var/tmp/bperf.sock 00:23:02.410 18:10:09 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:02.410 18:10:09 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99443 ']' 00:23:02.410 18:10:09 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:02.410 18:10:09 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:02.410 18:10:09 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:02.410 18:10:09 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.410 18:10:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:02.410 [2024-07-24 18:10:09.190412] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 00:23:02.410 [2024-07-24 18:10:09.190744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99443 ] 00:23:02.410 [2024-07-24 18:10:09.328908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.667 [2024-07-24 18:10:09.447041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.277 18:10:10 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.277 18:10:10 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:23:03.277 18:10:10 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EEMAQ1Cbxu 00:23:03.277 18:10:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EEMAQ1Cbxu 00:23:03.536 18:10:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VtwePWQPQF 00:23:03.536 18:10:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VtwePWQPQF 00:23:03.795 18:10:10 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:23:03.795 18:10:10 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:23:03.795 18:10:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:03.795 18:10:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:03.795 18:10:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:04.054 18:10:10 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.EEMAQ1Cbxu == 
\/\t\m\p\/\t\m\p\.\E\E\M\A\Q\1\C\b\x\u ]] 00:23:04.054 18:10:10 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:23:04.054 18:10:10 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:23:04.054 18:10:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:04.054 18:10:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:04.054 18:10:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:04.313 18:10:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.VtwePWQPQF == \/\t\m\p\/\t\m\p\.\V\t\w\e\P\W\Q\P\Q\F ]] 00:23:04.313 18:10:11 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:23:04.313 18:10:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:04.313 18:10:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:04.313 18:10:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:04.313 18:10:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:04.313 18:10:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:04.572 18:10:11 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:04.572 18:10:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:23:04.572 18:10:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:04.572 18:10:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:04.572 18:10:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:04.572 18:10:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:04.572 18:10:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:04.831 18:10:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:04.831 18:10:11 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:04.831 18:10:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:04.831 [2024-07-24 18:10:11.757090] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:05.092 nvme0n1 00:23:05.092 18:10:11 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:23:05.092 18:10:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:05.092 18:10:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:05.092 18:10:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:05.092 18:10:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:05.092 18:10:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:05.349 18:10:12 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:05.349 18:10:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:23:05.349 18:10:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:05.349 18:10:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:05.349 18:10:12 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:23:05.349 18:10:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:05.349 18:10:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:05.606 18:10:12 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:05.606 18:10:12 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:05.606 Running I/O for 1 seconds... 00:23:06.989 00:23:06.989 Latency(us) 00:23:06.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.989 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:06.989 nvme0n1 : 1.01 13201.34 51.57 0.00 0.00 9669.84 4962.01 20971.52 00:23:06.989 =================================================================================================================== 00:23:06.989 Total : 13201.34 51.57 0.00 0.00 9669.84 4962.01 20971.52 00:23:06.989 0 00:23:06.989 18:10:13 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:06.989 18:10:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:06.989 18:10:13 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:23:06.989 18:10:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:06.989 18:10:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:06.989 18:10:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:06.989 18:10:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:06.989 18:10:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:07.247 18:10:14 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:07.247 18:10:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:23:07.247 18:10:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:07.247 18:10:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:07.247 18:10:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:07.247 18:10:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:07.247 18:10:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:07.505 18:10:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:07.505 18:10:14 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:07.505 18:10:14 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:23:07.505 18:10:14 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:07.505 18:10:14 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:23:07.505 18:10:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.505 18:10:14 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:07.505 18:10:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:23:07.505 18:10:14 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:07.505 18:10:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:07.763 [2024-07-24 18:10:14.640005] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:07.764 [2024-07-24 18:10:14.640662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe11f30 (107): Transport endpoint is not connected 00:23:07.764 [2024-07-24 18:10:14.641629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe11f30 (9): Bad file descriptor 00:23:07.764 [2024-07-24 18:10:14.642622] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:07.764 [2024-07-24 18:10:14.642661] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:07.764 [2024-07-24 18:10:14.642675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:07.764 2024/07/24 18:10:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:07.764 request: 00:23:07.764 { 00:23:07.764 "method": "bdev_nvme_attach_controller", 00:23:07.764 "params": { 00:23:07.764 "name": "nvme0", 00:23:07.764 "trtype": "tcp", 00:23:07.764 "traddr": "127.0.0.1", 00:23:07.764 "adrfam": "ipv4", 00:23:07.764 "trsvcid": "4420", 00:23:07.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:07.764 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:07.764 "prchk_reftag": false, 00:23:07.764 "prchk_guard": false, 00:23:07.764 "hdgst": false, 00:23:07.764 "ddgst": false, 00:23:07.764 "psk": "key1" 00:23:07.764 } 00:23:07.764 } 00:23:07.764 Got JSON-RPC error response 00:23:07.764 GoRPCClient: error on JSON-RPC call 00:23:07.764 18:10:14 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:23:07.764 18:10:14 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.764 18:10:14 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.764 18:10:14 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.764 18:10:14 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:23:07.764 18:10:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:07.764 18:10:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:07.764 18:10:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:07.764 18:10:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:07.764 18:10:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.022 18:10:14 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 
00:23:08.022 18:10:14 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:23:08.022 18:10:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:08.022 18:10:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:08.022 18:10:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:08.022 18:10:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.022 18:10:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:08.281 18:10:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:08.281 18:10:15 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:08.281 18:10:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:08.574 18:10:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:08.574 18:10:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:08.840 18:10:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:08.840 18:10:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.840 18:10:15 keyring_file -- keyring/file.sh@77 -- # jq length 00:23:09.098 18:10:15 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:09.098 18:10:15 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.EEMAQ1Cbxu 00:23:09.098 18:10:15 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.EEMAQ1Cbxu 00:23:09.098 18:10:15 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:23:09.098 18:10:15 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.EEMAQ1Cbxu 00:23:09.098 18:10:15 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:23:09.098 18:10:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.098 18:10:15 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:09.098 18:10:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.098 18:10:16 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EEMAQ1Cbxu 00:23:09.098 18:10:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EEMAQ1Cbxu 00:23:09.356 [2024-07-24 18:10:16.197635] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EEMAQ1Cbxu': 0100660 00:23:09.356 [2024-07-24 18:10:16.197681] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:09.356 2024/07/24 18:10:16 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.EEMAQ1Cbxu], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:23:09.356 request: 00:23:09.356 { 00:23:09.356 "method": "keyring_file_add_key", 00:23:09.356 "params": { 00:23:09.356 "name": "key0", 00:23:09.356 "path": "/tmp/tmp.EEMAQ1Cbxu" 00:23:09.356 } 00:23:09.356 } 00:23:09.356 Got JSON-RPC error response 00:23:09.356 GoRPCClient: error on JSON-RPC call 00:23:09.356 18:10:16 keyring_file -- 
common/autotest_common.sh@653 -- # es=1 00:23:09.356 18:10:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:09.356 18:10:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:09.356 18:10:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:09.356 18:10:16 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.EEMAQ1Cbxu 00:23:09.356 18:10:16 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EEMAQ1Cbxu 00:23:09.356 18:10:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EEMAQ1Cbxu 00:23:09.614 18:10:16 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.EEMAQ1Cbxu 00:23:09.614 18:10:16 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:23:09.614 18:10:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:09.614 18:10:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:09.614 18:10:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:09.614 18:10:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:09.614 18:10:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:09.872 18:10:16 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:09.872 18:10:16 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:09.872 18:10:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:23:09.872 18:10:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:09.872 18:10:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:23:09.872 18:10:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.872 18:10:16 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:09.872 18:10:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.872 18:10:16 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:09.872 18:10:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:10.130 [2024-07-24 18:10:17.013820] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.EEMAQ1Cbxu': No such file or directory 00:23:10.130 [2024-07-24 18:10:17.013869] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:10.130 [2024-07-24 18:10:17.013896] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:10.130 [2024-07-24 18:10:17.013905] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:10.130 [2024-07-24 18:10:17.013915] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 
127.0.0.1) 00:23:10.131 2024/07/24 18:10:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:23:10.131 request: 00:23:10.131 { 00:23:10.131 "method": "bdev_nvme_attach_controller", 00:23:10.131 "params": { 00:23:10.131 "name": "nvme0", 00:23:10.131 "trtype": "tcp", 00:23:10.131 "traddr": "127.0.0.1", 00:23:10.131 "adrfam": "ipv4", 00:23:10.131 "trsvcid": "4420", 00:23:10.131 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.131 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:10.131 "prchk_reftag": false, 00:23:10.131 "prchk_guard": false, 00:23:10.131 "hdgst": false, 00:23:10.131 "ddgst": false, 00:23:10.131 "psk": "key0" 00:23:10.131 } 00:23:10.131 } 00:23:10.131 Got JSON-RPC error response 00:23:10.131 GoRPCClient: error on JSON-RPC call 00:23:10.131 18:10:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:23:10.131 18:10:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.131 18:10:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.131 18:10:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.131 18:10:17 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:10.131 18:10:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:10.388 18:10:17 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:10.388 18:10:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:10.388 18:10:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:10.388 18:10:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:10.388 18:10:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:10.388 18:10:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:10.388 18:10:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PJlbQgbXLG 00:23:10.388 18:10:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:10.388 18:10:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:10.388 18:10:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:10.388 18:10:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:10.388 18:10:17 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:10.388 18:10:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:10.388 18:10:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:10.646 18:10:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PJlbQgbXLG 00:23:10.646 18:10:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PJlbQgbXLG 00:23:10.646 18:10:17 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.PJlbQgbXLG 00:23:10.646 18:10:17 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PJlbQgbXLG 00:23:10.646 18:10:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PJlbQgbXLG 00:23:10.904 
18:10:17 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:10.904 18:10:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:11.196 nvme0n1 00:23:11.196 18:10:17 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:23:11.196 18:10:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:11.196 18:10:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:11.196 18:10:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:11.196 18:10:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:11.196 18:10:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:11.459 18:10:18 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:11.459 18:10:18 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:11.459 18:10:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:11.721 18:10:18 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:23:11.721 18:10:18 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:23:11.721 18:10:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:11.721 18:10:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:11.721 18:10:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:11.984 18:10:18 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:11.984 18:10:18 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:23:11.984 18:10:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:11.984 18:10:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:11.984 18:10:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:11.984 18:10:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:11.984 18:10:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:12.246 18:10:19 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:12.246 18:10:19 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:12.246 18:10:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:12.506 18:10:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:23:12.506 18:10:19 keyring_file -- keyring/file.sh@104 -- # jq length 00:23:12.506 18:10:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:12.768 18:10:19 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:12.769 18:10:19 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PJlbQgbXLG 00:23:12.769 18:10:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key 
key0 /tmp/tmp.PJlbQgbXLG 00:23:13.029 18:10:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VtwePWQPQF 00:23:13.029 18:10:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VtwePWQPQF 00:23:13.029 18:10:19 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:13.029 18:10:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:13.596 nvme0n1 00:23:13.596 18:10:20 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:13.596 18:10:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:13.855 18:10:20 keyring_file -- keyring/file.sh@112 -- # config='{ 00:23:13.855 "subsystems": [ 00:23:13.855 { 00:23:13.855 "subsystem": "keyring", 00:23:13.855 "config": [ 00:23:13.855 { 00:23:13.855 "method": "keyring_file_add_key", 00:23:13.855 "params": { 00:23:13.855 "name": "key0", 00:23:13.855 "path": "/tmp/tmp.PJlbQgbXLG" 00:23:13.855 } 00:23:13.855 }, 00:23:13.855 { 00:23:13.855 "method": "keyring_file_add_key", 00:23:13.855 "params": { 00:23:13.855 "name": "key1", 00:23:13.855 "path": "/tmp/tmp.VtwePWQPQF" 00:23:13.855 } 00:23:13.855 } 00:23:13.855 ] 00:23:13.855 }, 00:23:13.855 { 00:23:13.855 "subsystem": "iobuf", 00:23:13.855 "config": [ 00:23:13.855 { 00:23:13.855 "method": "iobuf_set_options", 00:23:13.855 "params": { 00:23:13.855 "large_bufsize": 135168, 00:23:13.855 "large_pool_count": 1024, 00:23:13.855 "small_bufsize": 8192, 00:23:13.855 "small_pool_count": 8192 00:23:13.855 } 00:23:13.855 } 00:23:13.855 ] 00:23:13.855 }, 00:23:13.855 { 00:23:13.855 "subsystem": "sock", 00:23:13.855 "config": [ 00:23:13.855 { 00:23:13.855 "method": "sock_set_default_impl", 00:23:13.855 "params": { 00:23:13.855 "impl_name": "posix" 00:23:13.855 } 00:23:13.855 }, 00:23:13.855 { 00:23:13.855 "method": "sock_impl_set_options", 00:23:13.855 "params": { 00:23:13.855 "enable_ktls": false, 00:23:13.855 "enable_placement_id": 0, 00:23:13.855 "enable_quickack": false, 00:23:13.855 "enable_recv_pipe": true, 00:23:13.856 "enable_zerocopy_send_client": false, 00:23:13.856 "enable_zerocopy_send_server": true, 00:23:13.856 "impl_name": "ssl", 00:23:13.856 "recv_buf_size": 4096, 00:23:13.856 "send_buf_size": 4096, 00:23:13.856 "tls_version": 0, 00:23:13.856 "zerocopy_threshold": 0 00:23:13.856 } 00:23:13.856 }, 00:23:13.856 { 00:23:13.856 "method": "sock_impl_set_options", 00:23:13.856 "params": { 00:23:13.856 "enable_ktls": false, 00:23:13.856 "enable_placement_id": 0, 00:23:13.856 "enable_quickack": false, 00:23:13.856 "enable_recv_pipe": true, 00:23:13.856 "enable_zerocopy_send_client": false, 00:23:13.856 "enable_zerocopy_send_server": true, 00:23:13.856 "impl_name": "posix", 00:23:13.856 "recv_buf_size": 2097152, 00:23:13.856 "send_buf_size": 2097152, 00:23:13.856 "tls_version": 0, 00:23:13.856 "zerocopy_threshold": 0 00:23:13.856 } 00:23:13.856 } 00:23:13.856 ] 00:23:13.856 }, 00:23:13.856 { 00:23:13.856 "subsystem": "vmd", 00:23:13.856 "config": [] 00:23:13.856 }, 00:23:13.856 { 00:23:13.856 "subsystem": "accel", 00:23:13.856 "config": [ 00:23:13.856 { 
00:23:13.856 "method": "accel_set_options", 00:23:13.856 "params": { 00:23:13.856 "buf_count": 2048, 00:23:13.856 "large_cache_size": 16, 00:23:13.856 "sequence_count": 2048, 00:23:13.856 "small_cache_size": 128, 00:23:13.856 "task_count": 2048 00:23:13.856 } 00:23:13.856 } 00:23:13.856 ] 00:23:13.856 }, 00:23:13.856 { 00:23:13.856 "subsystem": "bdev", 00:23:13.856 "config": [ 00:23:13.856 { 00:23:13.856 "method": "bdev_set_options", 00:23:13.856 "params": { 00:23:13.856 "bdev_auto_examine": true, 00:23:13.856 "bdev_io_cache_size": 256, 00:23:13.856 "bdev_io_pool_size": 65535, 00:23:13.856 "iobuf_large_cache_size": 16, 00:23:13.856 "iobuf_small_cache_size": 128 00:23:13.856 } 00:23:13.856 }, 00:23:13.856 { 00:23:13.856 "method": "bdev_raid_set_options", 00:23:13.856 "params": { 00:23:13.856 "process_max_bandwidth_mb_sec": 0, 00:23:13.856 "process_window_size_kb": 1024 00:23:13.856 } 00:23:13.856 }, 00:23:13.856 { 00:23:13.856 "method": "bdev_iscsi_set_options", 00:23:13.856 "params": { 00:23:13.856 "timeout_sec": 30 00:23:13.856 } 00:23:13.856 }, 00:23:13.856 { 00:23:13.856 "method": "bdev_nvme_set_options", 00:23:13.856 "params": { 00:23:13.856 "action_on_timeout": "none", 00:23:13.856 "allow_accel_sequence": false, 00:23:13.856 "arbitration_burst": 0, 00:23:13.856 "bdev_retry_count": 3, 00:23:13.856 "ctrlr_loss_timeout_sec": 0, 00:23:13.856 "delay_cmd_submit": true, 00:23:13.856 "dhchap_dhgroups": [ 00:23:13.856 "null", 00:23:13.856 "ffdhe2048", 00:23:13.856 "ffdhe3072", 00:23:13.856 "ffdhe4096", 00:23:13.856 "ffdhe6144", 00:23:13.856 "ffdhe8192" 00:23:13.856 ], 00:23:13.856 "dhchap_digests": [ 00:23:13.856 "sha256", 00:23:13.856 "sha384", 00:23:13.856 "sha512" 00:23:13.856 ], 00:23:13.856 "disable_auto_failback": false, 00:23:13.856 "fast_io_fail_timeout_sec": 0, 00:23:13.856 "generate_uuids": false, 00:23:13.856 "high_priority_weight": 0, 00:23:13.856 "io_path_stat": false, 00:23:13.856 "io_queue_requests": 512, 00:23:13.856 "keep_alive_timeout_ms": 10000, 00:23:13.856 "low_priority_weight": 0, 00:23:13.856 "medium_priority_weight": 0, 00:23:13.856 "nvme_adminq_poll_period_us": 10000, 00:23:13.856 "nvme_error_stat": false, 00:23:13.856 "nvme_ioq_poll_period_us": 0, 00:23:13.856 "rdma_cm_event_timeout_ms": 0, 00:23:13.856 "rdma_max_cq_size": 0, 00:23:13.856 "rdma_srq_size": 0, 00:23:13.856 "reconnect_delay_sec": 0, 00:23:13.856 "timeout_admin_us": 0, 00:23:13.856 "timeout_us": 0, 00:23:13.856 "transport_ack_timeout": 0, 00:23:13.856 "transport_retry_count": 4, 00:23:13.856 "transport_tos": 0 00:23:13.856 } 00:23:13.856 }, 00:23:13.856 { 00:23:13.856 "method": "bdev_nvme_attach_controller", 00:23:13.856 "params": { 00:23:13.856 "adrfam": "IPv4", 00:23:13.856 "ctrlr_loss_timeout_sec": 0, 00:23:13.856 "ddgst": false, 00:23:13.856 "fast_io_fail_timeout_sec": 0, 00:23:13.856 "hdgst": false, 00:23:13.856 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:13.856 "name": "nvme0", 00:23:13.856 "prchk_guard": false, 00:23:13.856 "prchk_reftag": false, 00:23:13.856 "psk": "key0", 00:23:13.856 "reconnect_delay_sec": 0, 00:23:13.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.856 "traddr": "127.0.0.1", 00:23:13.856 "trsvcid": "4420", 00:23:13.856 "trtype": "TCP" 00:23:13.856 } 00:23:13.856 }, 00:23:13.856 { 00:23:13.856 "method": "bdev_nvme_set_hotplug", 00:23:13.856 "params": { 00:23:13.856 "enable": false, 00:23:13.856 "period_us": 100000 00:23:13.856 } 00:23:13.856 }, 00:23:13.856 { 00:23:13.856 "method": "bdev_wait_for_examine" 00:23:13.856 } 00:23:13.856 ] 00:23:13.856 }, 00:23:13.856 { 
00:23:13.856 "subsystem": "nbd", 00:23:13.856 "config": [] 00:23:13.856 } 00:23:13.856 ] 00:23:13.856 }' 00:23:13.856 18:10:20 keyring_file -- keyring/file.sh@114 -- # killprocess 99443 00:23:13.856 18:10:20 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99443 ']' 00:23:13.856 18:10:20 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99443 00:23:13.856 18:10:20 keyring_file -- common/autotest_common.sh@955 -- # uname 00:23:13.856 18:10:20 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.856 18:10:20 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99443 00:23:13.856 killing process with pid 99443 00:23:13.856 Received shutdown signal, test time was about 1.000000 seconds 00:23:13.856 00:23:13.856 Latency(us) 00:23:13.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.856 =================================================================================================================== 00:23:13.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.856 18:10:20 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:13.856 18:10:20 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:13.856 18:10:20 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99443' 00:23:13.856 18:10:20 keyring_file -- common/autotest_common.sh@969 -- # kill 99443 00:23:13.856 18:10:20 keyring_file -- common/autotest_common.sh@974 -- # wait 99443 00:23:14.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:14.115 18:10:20 keyring_file -- keyring/file.sh@117 -- # bperfpid=99913 00:23:14.115 18:10:20 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:14.115 18:10:20 keyring_file -- keyring/file.sh@119 -- # waitforlisten 99913 /var/tmp/bperf.sock 00:23:14.115 18:10:20 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:23:14.115 "subsystems": [ 00:23:14.115 { 00:23:14.115 "subsystem": "keyring", 00:23:14.115 "config": [ 00:23:14.115 { 00:23:14.115 "method": "keyring_file_add_key", 00:23:14.115 "params": { 00:23:14.115 "name": "key0", 00:23:14.115 "path": "/tmp/tmp.PJlbQgbXLG" 00:23:14.115 } 00:23:14.115 }, 00:23:14.115 { 00:23:14.115 "method": "keyring_file_add_key", 00:23:14.115 "params": { 00:23:14.115 "name": "key1", 00:23:14.115 "path": "/tmp/tmp.VtwePWQPQF" 00:23:14.115 } 00:23:14.115 } 00:23:14.115 ] 00:23:14.115 }, 00:23:14.115 { 00:23:14.115 "subsystem": "iobuf", 00:23:14.115 "config": [ 00:23:14.115 { 00:23:14.115 "method": "iobuf_set_options", 00:23:14.115 "params": { 00:23:14.115 "large_bufsize": 135168, 00:23:14.115 "large_pool_count": 1024, 00:23:14.115 "small_bufsize": 8192, 00:23:14.115 "small_pool_count": 8192 00:23:14.115 } 00:23:14.115 } 00:23:14.115 ] 00:23:14.115 }, 00:23:14.115 { 00:23:14.115 "subsystem": "sock", 00:23:14.115 "config": [ 00:23:14.115 { 00:23:14.115 "method": "sock_set_default_impl", 00:23:14.115 "params": { 00:23:14.115 "impl_name": "posix" 00:23:14.115 } 00:23:14.115 }, 00:23:14.115 { 00:23:14.115 "method": "sock_impl_set_options", 00:23:14.115 "params": { 00:23:14.115 "enable_ktls": false, 00:23:14.115 "enable_placement_id": 0, 00:23:14.115 "enable_quickack": false, 00:23:14.115 "enable_recv_pipe": true, 00:23:14.115 "enable_zerocopy_send_client": false, 00:23:14.115 "enable_zerocopy_send_server": true, 
00:23:14.115 "impl_name": "ssl", 00:23:14.115 "recv_buf_size": 4096, 00:23:14.115 "send_buf_size": 4096, 00:23:14.115 "tls_version": 0, 00:23:14.115 "zerocopy_threshold": 0 00:23:14.115 } 00:23:14.115 }, 00:23:14.115 { 00:23:14.115 "method": "sock_impl_set_options", 00:23:14.115 "params": { 00:23:14.115 "enable_ktls": false, 00:23:14.115 "enable_placement_id": 0, 00:23:14.115 "enable_quickack": false, 00:23:14.115 "enable_recv_pipe": true, 00:23:14.115 "enable_zerocopy_send_client": false, 00:23:14.115 "enable_zerocopy_send_server": true, 00:23:14.115 "impl_name": "posix", 00:23:14.115 "recv_buf_size": 2097152, 00:23:14.115 "send_buf_size": 2097152, 00:23:14.115 "tls_version": 0, 00:23:14.115 "zerocopy_threshold": 0 00:23:14.115 } 00:23:14.115 } 00:23:14.115 ] 00:23:14.115 }, 00:23:14.115 { 00:23:14.115 "subsystem": "vmd", 00:23:14.115 "config": [] 00:23:14.115 }, 00:23:14.115 { 00:23:14.115 "subsystem": "accel", 00:23:14.115 "config": [ 00:23:14.115 { 00:23:14.115 "method": "accel_set_options", 00:23:14.115 "params": { 00:23:14.115 "buf_count": 2048, 00:23:14.115 "large_cache_size": 16, 00:23:14.115 "sequence_count": 2048, 00:23:14.115 "small_cache_size": 128, 00:23:14.115 "task_count": 2048 00:23:14.115 } 00:23:14.115 } 00:23:14.115 ] 00:23:14.115 }, 00:23:14.115 { 00:23:14.115 "subsystem": "bdev", 00:23:14.115 "config": [ 00:23:14.115 { 00:23:14.115 "method": "bdev_set_options", 00:23:14.115 "params": { 00:23:14.116 "bdev_auto_examine": true, 00:23:14.116 "bdev_io_cache_size": 256, 00:23:14.116 "bdev_io_pool_size": 65535, 00:23:14.116 "iobuf_large_cache_size": 16, 00:23:14.116 "iobuf_small_cache_size": 128 00:23:14.116 } 00:23:14.116 }, 00:23:14.116 { 00:23:14.116 "method": "bdev_raid_set_options", 00:23:14.116 "params": { 00:23:14.116 "process_max_bandwidth_mb_sec": 0, 00:23:14.116 "process_window_size_kb": 1024 00:23:14.116 } 00:23:14.116 }, 00:23:14.116 { 00:23:14.116 "method": "bdev_iscsi_set_options", 00:23:14.116 "params": { 00:23:14.116 "timeout_sec": 30 00:23:14.116 } 00:23:14.116 }, 00:23:14.116 { 00:23:14.116 "method": "bdev_nvme_set_options", 00:23:14.116 "params": { 00:23:14.116 "action_on_timeout": "none", 00:23:14.116 "allow_accel_sequence": false, 00:23:14.116 "arbitration_burst": 0, 00:23:14.116 "bdev_retry_count": 3, 00:23:14.116 "ctrlr_loss_timeout_sec": 0, 00:23:14.116 "delay_cmd_submit": true, 00:23:14.116 "dhchap_dhgroups": [ 00:23:14.116 "null", 00:23:14.116 "ffdhe2048", 00:23:14.116 "ffdhe3072", 00:23:14.116 "ffdhe4096", 00:23:14.116 "ffdhe6144", 00:23:14.116 "ffdhe8192" 00:23:14.116 ], 00:23:14.116 "dhchap_digests": [ 00:23:14.116 "sha256", 00:23:14.116 "sha384", 00:23:14.116 "sha512" 00:23:14.116 ], 00:23:14.116 "disable_auto_failback": false, 00:23:14.116 "fast_io_fail_timeout_sec": 0, 00:23:14.116 "generate_uuids": false, 00:23:14.116 "high_priority_weight": 0, 00:23:14.116 "io_path_stat": false, 00:23:14.116 "io_queue_requests": 512, 00:23:14.116 "keep_alive_timeout_ms": 10000, 00:23:14.116 "low_priority_weight": 0, 00:23:14.116 "medium_priority_weight": 0, 00:23:14.116 "nvme_adminq_poll_period_us": 10000, 00:23:14.116 "nvme_error_stat": false, 00:23:14.116 "nvme_ioq_poll_period_us": 0, 00:23:14.116 "rdma_cm_event_timeout_ms": 0, 00:23:14.116 "rdma_max_cq_size": 0, 00:23:14.116 "rdma_srq_size": 0, 00:23:14.116 "reconnect_delay_sec": 0, 00:23:14.116 "timeout_admin_us": 0, 00:23:14.116 "timeout_us": 0, 00:23:14.116 "transport_ack_timeout": 0, 00:23:14.116 "transport_retry_count": 4, 00:23:14.116 "transport_tos": 0 00:23:14.116 } 00:23:14.116 }, 00:23:14.116 { 
00:23:14.116 "method": "bdev_nvme_attach_controller", 00:23:14.116 "params": { 00:23:14.116 "adrfam": "IPv4", 00:23:14.116 "ctrlr_loss_timeout_sec": 0, 00:23:14.116 "ddgst": false, 00:23:14.116 "fast_io_fail_timeout_sec": 0, 00:23:14.116 "hdgst": false, 00:23:14.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:14.116 "name": "nvme0", 00:23:14.116 "prchk_guard": false, 00:23:14.116 "prchk_reftag": false, 00:23:14.116 "psk": "key0", 00:23:14.116 "reconnect_delay_sec": 0, 00:23:14.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.116 "traddr": "127.0.0.1", 00:23:14.116 "trsvcid": "4420", 00:23:14.116 "trtype": "TCP" 00:23:14.116 } 00:23:14.116 }, 00:23:14.116 { 00:23:14.116 "method": "bdev_nvme_set_hotplug", 00:23:14.116 "params": { 00:23:14.116 "enable": false, 00:23:14.116 "period_us": 100000 00:23:14.116 } 00:23:14.116 }, 00:23:14.116 { 00:23:14.116 "method": "bdev_wait_for_examine" 00:23:14.116 } 00:23:14.116 ] 00:23:14.116 }, 00:23:14.116 { 00:23:14.116 "subsystem": "nbd", 00:23:14.116 "config": [] 00:23:14.116 } 00:23:14.116 ] 00:23:14.116 }' 00:23:14.116 18:10:20 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99913 ']' 00:23:14.116 18:10:20 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:14.116 18:10:20 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.116 18:10:20 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:14.116 18:10:20 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.116 18:10:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:14.116 [2024-07-24 18:10:20.976451] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
00:23:14.116 [2024-07-24 18:10:20.976572] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99913 ] 00:23:14.374 [2024-07-24 18:10:21.115980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.374 [2024-07-24 18:10:21.225073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.632 [2024-07-24 18:10:21.393975] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.197 18:10:21 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.197 18:10:21 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:23:15.197 18:10:21 keyring_file -- keyring/file.sh@120 -- # jq length 00:23:15.197 18:10:21 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:23:15.197 18:10:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:15.197 18:10:22 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:23:15.197 18:10:22 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:23:15.197 18:10:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:15.197 18:10:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:15.197 18:10:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:15.197 18:10:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:15.197 18:10:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:15.455 18:10:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:15.455 18:10:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:23:15.455 18:10:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:15.455 18:10:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:15.455 18:10:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:15.455 18:10:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:15.455 18:10:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:15.712 18:10:22 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:23:15.712 18:10:22 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:23:15.712 18:10:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:15.712 18:10:22 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:23:15.970 18:10:22 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:23:15.970 18:10:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:23:15.970 18:10:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.PJlbQgbXLG /tmp/tmp.VtwePWQPQF 00:23:15.970 18:10:22 keyring_file -- keyring/file.sh@20 -- # killprocess 99913 00:23:15.970 18:10:22 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99913 ']' 00:23:15.970 18:10:22 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99913 00:23:15.970 18:10:22 keyring_file -- common/autotest_common.sh@955 -- # uname 00:23:15.970 18:10:22 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:15.970 
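The (( 2 == 2 )) and (( 1 == 1 )) checks in the trace above compare key reference counts reported over the bperf RPC socket: key0, which the attached controller uses as its PSK, reports refcnt 2, while key1 is merely registered and reports 1. Roughly what the get_refcnt and bperf_cmd helpers expand to, using the same socket path, key names and jq filters seen in this log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    $rpc keyring_get_keys | jq length                                           # 2 keys registered
    $rpc keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt  # 2 (config + controller)
    $rpc keyring_get_keys | jq '.[] | select(.name == "key1")' | jq -r .refcnt  # 1
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'                           # nvme0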
18:10:22 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99913 00:23:15.970 killing process with pid 99913 00:23:15.970 Received shutdown signal, test time was about 1.000000 seconds 00:23:15.970 00:23:15.970 Latency(us) 00:23:15.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.970 =================================================================================================================== 00:23:15.970 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.970 18:10:22 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:15.970 18:10:22 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:15.970 18:10:22 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99913' 00:23:15.970 18:10:22 keyring_file -- common/autotest_common.sh@969 -- # kill 99913 00:23:15.970 18:10:22 keyring_file -- common/autotest_common.sh@974 -- # wait 99913 00:23:16.228 18:10:23 keyring_file -- keyring/file.sh@21 -- # killprocess 99412 00:23:16.228 18:10:23 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99412 ']' 00:23:16.228 18:10:23 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99412 00:23:16.228 18:10:23 keyring_file -- common/autotest_common.sh@955 -- # uname 00:23:16.228 18:10:23 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:16.228 18:10:23 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99412 00:23:16.228 killing process with pid 99412 00:23:16.228 18:10:23 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:16.228 18:10:23 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:16.228 18:10:23 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99412' 00:23:16.228 18:10:23 keyring_file -- common/autotest_common.sh@969 -- # kill 99412 00:23:16.228 [2024-07-24 18:10:23.101274] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:16.228 18:10:23 keyring_file -- common/autotest_common.sh@974 -- # wait 99412 00:23:16.518 00:23:16.518 real 0m15.581s 00:23:16.518 user 0m38.041s 00:23:16.518 sys 0m3.711s 00:23:16.518 18:10:23 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:16.518 18:10:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:16.518 ************************************ 00:23:16.518 END TEST keyring_file 00:23:16.518 ************************************ 00:23:16.518 18:10:23 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:23:16.518 18:10:23 -- spdk/autotest.sh@301 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:16.518 18:10:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:16.518 18:10:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.518 18:10:23 -- common/autotest_common.sh@10 -- # set +x 00:23:16.779 ************************************ 00:23:16.779 START TEST keyring_linux 00:23:16.779 ************************************ 00:23:16.779 18:10:23 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:16.779 * Looking for test storage... 
00:23:16.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:16.779 18:10:23 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=dd5b4d38-cb18-43dc-996b-2a3d0b1391ee 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:16.779 18:10:23 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.779 18:10:23 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.779 18:10:23 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.779 18:10:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.779 18:10:23 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.779 18:10:23 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.779 18:10:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:23:16.779 18:10:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:16.779 18:10:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:16.779 18:10:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:16.779 18:10:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:23:16.779 18:10:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:23:16.779 18:10:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:23:16.779 18:10:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:16.779 18:10:23 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:23:16.779 /tmp/:spdk-test:key0 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:23:16.779 18:10:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:16.779 18:10:23 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:23:16.779 /tmp/:spdk-test:key1 00:23:16.779 18:10:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:23:16.779 18:10:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100065 00:23:16.779 18:10:23 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:16.779 18:10:23 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100065 00:23:16.779 18:10:23 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100065 ']' 00:23:16.779 18:10:23 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.779 18:10:23 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.779 18:10:23 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.779 18:10:23 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.779 18:10:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:17.037 [2024-07-24 18:10:23.786104] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
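prep_key above writes each TLS PSK in NVMe/TCP interchange form to a file and tightens its permissions before anything touches the kernel keyring. A condensed bash equivalent for key0, with the interchange value copied verbatim from this trace; the derivation of the base64 portion (the key bytes with a trailing CRC-32, which is what format_interchange_psk appears to compute via the inline python above) is stated here as an assumption rather than recomputed.

    # key0 = 00112233445566778899aabbccddeeff in interchange form; value taken from this log.
    key0_path=/tmp/:spdk-test:key0
    echo "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key0_path"
    chmod 0600 "$key0_path"   # restrict permissions, as keyring/common.sh does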
00:23:17.037 [2024-07-24 18:10:23.786232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100065 ] 00:23:17.037 [2024-07-24 18:10:23.930098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.294 [2024-07-24 18:10:24.048721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.860 18:10:24 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.860 18:10:24 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:23:17.860 18:10:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:23:17.860 18:10:24 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.860 18:10:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:17.860 [2024-07-24 18:10:24.792614] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.860 null0 00:23:17.860 [2024-07-24 18:10:24.824591] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:17.860 [2024-07-24 18:10:24.824847] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:18.149 18:10:24 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.149 18:10:24 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:23:18.149 586707376 00:23:18.149 18:10:24 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:23:18.149 276945303 00:23:18.149 18:10:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100101 00:23:18.149 18:10:24 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:23:18.149 18:10:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100101 /var/tmp/bperf.sock 00:23:18.149 18:10:24 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100101 ']' 00:23:18.149 18:10:24 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:18.149 18:10:24 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:18.149 18:10:24 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:18.149 18:10:24 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.149 18:10:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:18.149 [2024-07-24 18:10:24.902203] Starting SPDK v24.09-pre git sha1 03a38592a / DPDK 24.03.0 initialization... 
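The serials 586707376 and 276945303 printed above are kernel key IDs: keyring_linux stores both interchange PSKs in the session keyring (@s) with keyctl and refers to them later only by name. The same round trip, condensed from the keyctl calls visible in this trace:

    # Load both PSKs into the session keyring; keyctl add prints the serial number.
    sn0=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
    sn1=$(keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s)

    # Later checks resolve the name back to a serial and dump the payload.
    keyctl search @s user :spdk-test:key0    # prints the serial, e.g. 586707376
    keyctl print "$sn0"                      # prints the NVMeTLSkey-1:00:...: payload

    # Cleanup unlinks both keys again ("1 links removed" in the log above).
    keyctl unlink "$sn0"
    keyctl unlink "$sn1"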
00:23:18.149 [2024-07-24 18:10:24.902322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100101 ] 00:23:18.149 [2024-07-24 18:10:25.039611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.412 [2024-07-24 18:10:25.157295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.979 18:10:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.979 18:10:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:23:18.979 18:10:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:23:18.979 18:10:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:23:19.237 18:10:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:23:19.237 18:10:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:19.495 18:10:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:19.495 18:10:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:19.754 [2024-07-24 18:10:26.626152] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.754 nvme0n1 00:23:19.754 18:10:26 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:23:19.754 18:10:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:23:19.754 18:10:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:19.754 18:10:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:19.754 18:10:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:19.754 18:10:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:20.323 18:10:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:23:20.323 18:10:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:20.323 18:10:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:23:20.323 18:10:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:23:20.323 18:10:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:23:20.323 18:10:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:20.323 18:10:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:20.323 18:10:27 keyring_linux -- keyring/linux.sh@25 -- # sn=586707376 00:23:20.323 18:10:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:23:20.323 18:10:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:20.323 18:10:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 586707376 == \5\8\6\7\0\7\3\7\6 ]] 00:23:20.323 18:10:27 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 586707376 00:23:20.323 18:10:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:23:20.323 18:10:27 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:20.581 Running I/O for 1 seconds... 00:23:21.526 00:23:21.526 Latency(us) 00:23:21.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.526 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:21.526 nvme0n1 : 1.01 15055.33 58.81 0.00 0.00 8460.13 2761.87 10236.10 00:23:21.526 =================================================================================================================== 00:23:21.526 Total : 15055.33 58.81 0.00 0.00 8460.13 2761.87 10236.10 00:23:21.526 0 00:23:21.526 18:10:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:21.526 18:10:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:21.784 18:10:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:23:21.784 18:10:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:23:21.784 18:10:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:21.784 18:10:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:21.784 18:10:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:21.784 18:10:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:22.042 18:10:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:23:22.042 18:10:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:22.042 18:10:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:23:22.042 18:10:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:22.042 18:10:28 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:23:22.042 18:10:28 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:22.042 18:10:28 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:23:22.042 18:10:28 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.042 18:10:28 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:22.042 18:10:28 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.042 18:10:28 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:22.042 18:10:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:23:22.323 [2024-07-24 18:10:29.277503] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:22.324 [2024-07-24 18:10:29.278124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142ea0 (107): Transport endpoint is not connected 00:23:22.324 [2024-07-24 18:10:29.279110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142ea0 (9): Bad file descriptor 00:23:22.324 [2024-07-24 18:10:29.280107] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:22.324 [2024-07-24 18:10:29.280130] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:22.324 [2024-07-24 18:10:29.280140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:22.324 2024/07/24 18:10:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:22.581 request: 00:23:22.581 { 00:23:22.581 "method": "bdev_nvme_attach_controller", 00:23:22.581 "params": { 00:23:22.581 "name": "nvme0", 00:23:22.581 "trtype": "tcp", 00:23:22.581 "traddr": "127.0.0.1", 00:23:22.581 "adrfam": "ipv4", 00:23:22.581 "trsvcid": "4420", 00:23:22.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:22.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:22.581 "prchk_reftag": false, 00:23:22.581 "prchk_guard": false, 00:23:22.581 "hdgst": false, 00:23:22.581 "ddgst": false, 00:23:22.581 "psk": ":spdk-test:key1" 00:23:22.581 } 00:23:22.581 } 00:23:22.581 Got JSON-RPC error response 00:23:22.581 GoRPCClient: error on JSON-RPC call 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@33 -- # sn=586707376 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 586707376 00:23:22.581 1 links removed 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@33 -- # sn=276945303 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 276945303 00:23:22.581 1 links removed 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100101 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100101 ']' 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100101 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100101 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:22.581 killing process with pid 100101 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100101' 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@969 -- # kill 100101 00:23:22.581 Received shutdown signal, test time was about 1.000000 seconds 00:23:22.581 00:23:22.581 Latency(us) 00:23:22.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.581 =================================================================================================================== 00:23:22.581 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@974 -- # wait 100101 00:23:22.581 18:10:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100065 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100065 ']' 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100065 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.581 18:10:29 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100065 00:23:22.841 18:10:29 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:22.841 18:10:29 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:22.841 killing process with pid 100065 00:23:22.841 18:10:29 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100065' 00:23:22.841 18:10:29 keyring_linux -- common/autotest_common.sh@969 -- # kill 100065 00:23:22.841 18:10:29 keyring_linux -- common/autotest_common.sh@974 -- # wait 100065 00:23:23.100 00:23:23.100 real 0m6.408s 00:23:23.100 user 0m12.282s 00:23:23.100 sys 0m1.770s 00:23:23.100 18:10:29 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:23.100 18:10:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:23.100 ************************************ 00:23:23.100 END TEST keyring_linux 00:23:23.100 ************************************ 00:23:23.100 18:10:29 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:23:23.100 18:10:29 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:23:23.100 18:10:29 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:23:23.100 18:10:29 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:23:23.100 18:10:29 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:23:23.100 18:10:29 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 
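Condensed into plain commands, the keyring_linux data path exercised above looks as follows; every flag, path and NQN is taken from this trace, so this is a sketch of what linux.sh drives through bperf_cmd, not a replacement for it. The second attach is the deliberate negative case: it uses :spdk-test:key1, presumably not the PSK the target side was set up with, and is expected to fail (the NOT wrapper in the trace only asserts a non-zero exit).

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Happy path: attach with the keyring-backed PSK the target knows, run I/O, detach.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    $rpc bdev_nvme_detach_controller nvme0

    # Negative path: attaching with the other key must fail.
    ! $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1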
00:23:23.100 18:10:29 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:23:23.100 18:10:29 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:23:23.100 18:10:29 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:23:23.100 18:10:29 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:23:23.100 18:10:29 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:23:23.100 18:10:29 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:23:23.100 18:10:29 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:23:23.100 18:10:29 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:23:23.100 18:10:29 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:23:23.100 18:10:29 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:23:23.100 18:10:29 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:23:23.100 18:10:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.100 18:10:29 -- common/autotest_common.sh@10 -- # set +x 00:23:23.100 18:10:29 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:23:23.100 18:10:29 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:23:23.100 18:10:29 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:23:23.100 18:10:29 -- common/autotest_common.sh@10 -- # set +x 00:23:25.039 INFO: APP EXITING 00:23:25.039 INFO: killing all VMs 00:23:25.039 INFO: killing vhost app 00:23:25.039 INFO: EXIT DONE 00:23:25.366 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:25.366 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:25.366 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:26.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:26.301 Cleaning 00:23:26.301 Removing: /var/run/dpdk/spdk0/config 00:23:26.301 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:26.301 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:26.301 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:26.301 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:26.301 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:26.301 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:26.301 Removing: /var/run/dpdk/spdk1/config 00:23:26.301 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:26.301 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:26.301 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:26.301 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:26.301 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:26.301 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:26.301 Removing: /var/run/dpdk/spdk2/config 00:23:26.301 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:26.301 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:26.301 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:26.301 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:26.301 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:26.301 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:26.301 Removing: /var/run/dpdk/spdk3/config 00:23:26.301 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:26.301 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:26.301 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:26.301 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:26.301 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:26.301 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:26.301 Removing: /var/run/dpdk/spdk4/config 00:23:26.301 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:26.301 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:26.301 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:26.301 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:26.301 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:26.301 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:26.301 Removing: /dev/shm/nvmf_trace.0 00:23:26.301 Removing: /dev/shm/spdk_tgt_trace.pid60631 00:23:26.301 Removing: /var/run/dpdk/spdk0 00:23:26.301 Removing: /var/run/dpdk/spdk1 00:23:26.301 Removing: /var/run/dpdk/spdk2 00:23:26.301 Removing: /var/run/dpdk/spdk3 00:23:26.301 Removing: /var/run/dpdk/spdk4 00:23:26.301 Removing: /var/run/dpdk/spdk_pid100065 00:23:26.301 Removing: /var/run/dpdk/spdk_pid100101 00:23:26.301 Removing: /var/run/dpdk/spdk_pid60486 00:23:26.301 Removing: /var/run/dpdk/spdk_pid60631 00:23:26.301 Removing: /var/run/dpdk/spdk_pid60893 00:23:26.301 Removing: /var/run/dpdk/spdk_pid60986 00:23:26.301 Removing: /var/run/dpdk/spdk_pid61020 00:23:26.301 Removing: /var/run/dpdk/spdk_pid61135 00:23:26.301 Removing: /var/run/dpdk/spdk_pid61166 00:23:26.301 Removing: /var/run/dpdk/spdk_pid61284 00:23:26.301 Removing: /var/run/dpdk/spdk_pid61560 00:23:26.301 Removing: /var/run/dpdk/spdk_pid61736 00:23:26.301 Removing: /var/run/dpdk/spdk_pid61818 00:23:26.301 Removing: /var/run/dpdk/spdk_pid61906 00:23:26.301 Removing: /var/run/dpdk/spdk_pid61995 00:23:26.301 Removing: /var/run/dpdk/spdk_pid62034 00:23:26.301 Removing: /var/run/dpdk/spdk_pid62069 00:23:26.301 Removing: /var/run/dpdk/spdk_pid62131 00:23:26.301 Removing: /var/run/dpdk/spdk_pid62237 00:23:26.301 Removing: /var/run/dpdk/spdk_pid62864 00:23:26.301 Removing: /var/run/dpdk/spdk_pid62928 00:23:26.301 Removing: /var/run/dpdk/spdk_pid62997 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63025 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63104 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63132 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63212 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63239 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63286 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63316 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63368 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63398 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63545 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63581 00:23:26.301 Removing: /var/run/dpdk/spdk_pid63655 00:23:26.301 Removing: /var/run/dpdk/spdk_pid64087 00:23:26.301 Removing: /var/run/dpdk/spdk_pid64416 00:23:26.301 Removing: /var/run/dpdk/spdk_pid66835 00:23:26.301 Removing: /var/run/dpdk/spdk_pid66881 00:23:26.301 Removing: /var/run/dpdk/spdk_pid67188 00:23:26.301 Removing: /var/run/dpdk/spdk_pid67234 00:23:26.301 Removing: /var/run/dpdk/spdk_pid67591 00:23:26.301 Removing: /var/run/dpdk/spdk_pid68123 00:23:26.301 Removing: /var/run/dpdk/spdk_pid68571 00:23:26.301 Removing: /var/run/dpdk/spdk_pid69538 00:23:26.301 Removing: /var/run/dpdk/spdk_pid70509 00:23:26.301 Removing: /var/run/dpdk/spdk_pid70631 00:23:26.301 Removing: /var/run/dpdk/spdk_pid70697 00:23:26.301 Removing: /var/run/dpdk/spdk_pid72149 00:23:26.301 Removing: /var/run/dpdk/spdk_pid72436 00:23:26.301 Removing: /var/run/dpdk/spdk_pid75757 00:23:26.301 Removing: /var/run/dpdk/spdk_pid76130 00:23:26.301 Removing: /var/run/dpdk/spdk_pid76727 00:23:26.557 Removing: /var/run/dpdk/spdk_pid77136 00:23:26.557 Removing: /var/run/dpdk/spdk_pid82428 00:23:26.557 Removing: /var/run/dpdk/spdk_pid82877 00:23:26.557 Removing: /var/run/dpdk/spdk_pid82985 00:23:26.557 
Removing: /var/run/dpdk/spdk_pid83133 00:23:26.557 Removing: /var/run/dpdk/spdk_pid83180 00:23:26.557 Removing: /var/run/dpdk/spdk_pid83220 00:23:26.557 Removing: /var/run/dpdk/spdk_pid83271 00:23:26.557 Removing: /var/run/dpdk/spdk_pid83430 00:23:26.557 Removing: /var/run/dpdk/spdk_pid83579 00:23:26.557 Removing: /var/run/dpdk/spdk_pid83834 00:23:26.557 Removing: /var/run/dpdk/spdk_pid83945 00:23:26.557 Removing: /var/run/dpdk/spdk_pid84193 00:23:26.557 Removing: /var/run/dpdk/spdk_pid84320 00:23:26.557 Removing: /var/run/dpdk/spdk_pid84441 00:23:26.557 Removing: /var/run/dpdk/spdk_pid84780 00:23:26.557 Removing: /var/run/dpdk/spdk_pid85229 00:23:26.557 Removing: /var/run/dpdk/spdk_pid85535 00:23:26.557 Removing: /var/run/dpdk/spdk_pid86018 00:23:26.557 Removing: /var/run/dpdk/spdk_pid86025 00:23:26.557 Removing: /var/run/dpdk/spdk_pid86366 00:23:26.557 Removing: /var/run/dpdk/spdk_pid86382 00:23:26.557 Removing: /var/run/dpdk/spdk_pid86402 00:23:26.557 Removing: /var/run/dpdk/spdk_pid86427 00:23:26.557 Removing: /var/run/dpdk/spdk_pid86439 00:23:26.557 Removing: /var/run/dpdk/spdk_pid86795 00:23:26.557 Removing: /var/run/dpdk/spdk_pid86840 00:23:26.557 Removing: /var/run/dpdk/spdk_pid87186 00:23:26.557 Removing: /var/run/dpdk/spdk_pid87442 00:23:26.557 Removing: /var/run/dpdk/spdk_pid87935 00:23:26.557 Removing: /var/run/dpdk/spdk_pid88516 00:23:26.557 Removing: /var/run/dpdk/spdk_pid89887 00:23:26.557 Removing: /var/run/dpdk/spdk_pid90487 00:23:26.557 Removing: /var/run/dpdk/spdk_pid90489 00:23:26.557 Removing: /var/run/dpdk/spdk_pid92429 00:23:26.557 Removing: /var/run/dpdk/spdk_pid92506 00:23:26.557 Removing: /var/run/dpdk/spdk_pid92596 00:23:26.557 Removing: /var/run/dpdk/spdk_pid92692 00:23:26.557 Removing: /var/run/dpdk/spdk_pid92854 00:23:26.557 Removing: /var/run/dpdk/spdk_pid92940 00:23:26.557 Removing: /var/run/dpdk/spdk_pid93036 00:23:26.557 Removing: /var/run/dpdk/spdk_pid93126 00:23:26.557 Removing: /var/run/dpdk/spdk_pid93464 00:23:26.557 Removing: /var/run/dpdk/spdk_pid94159 00:23:26.557 Removing: /var/run/dpdk/spdk_pid95518 00:23:26.557 Removing: /var/run/dpdk/spdk_pid95723 00:23:26.557 Removing: /var/run/dpdk/spdk_pid96017 00:23:26.557 Removing: /var/run/dpdk/spdk_pid96317 00:23:26.557 Removing: /var/run/dpdk/spdk_pid96866 00:23:26.557 Removing: /var/run/dpdk/spdk_pid96877 00:23:26.557 Removing: /var/run/dpdk/spdk_pid97232 00:23:26.557 Removing: /var/run/dpdk/spdk_pid97394 00:23:26.557 Removing: /var/run/dpdk/spdk_pid97551 00:23:26.557 Removing: /var/run/dpdk/spdk_pid97648 00:23:26.557 Removing: /var/run/dpdk/spdk_pid97804 00:23:26.557 Removing: /var/run/dpdk/spdk_pid97913 00:23:26.557 Removing: /var/run/dpdk/spdk_pid98592 00:23:26.557 Removing: /var/run/dpdk/spdk_pid98626 00:23:26.557 Removing: /var/run/dpdk/spdk_pid98657 00:23:26.557 Removing: /var/run/dpdk/spdk_pid98915 00:23:26.557 Removing: /var/run/dpdk/spdk_pid98947 00:23:26.557 Removing: /var/run/dpdk/spdk_pid98981 00:23:26.557 Removing: /var/run/dpdk/spdk_pid99412 00:23:26.557 Removing: /var/run/dpdk/spdk_pid99443 00:23:26.557 Removing: /var/run/dpdk/spdk_pid99913 00:23:26.557 Clean 00:23:26.557 18:10:33 -- common/autotest_common.sh@1451 -- # return 0 00:23:26.557 18:10:33 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:23:26.557 18:10:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:26.557 18:10:33 -- common/autotest_common.sh@10 -- # set +x 00:23:26.815 18:10:33 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:23:26.815 18:10:33 -- common/autotest_common.sh@730 -- # xtrace_disable 
00:23:26.815 18:10:33 -- common/autotest_common.sh@10 -- # set +x 00:23:26.815 18:10:33 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:26.815 18:10:33 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:26.815 18:10:33 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:26.815 18:10:33 -- spdk/autotest.sh@395 -- # hash lcov 00:23:26.815 18:10:33 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:26.815 18:10:33 -- spdk/autotest.sh@397 -- # hostname 00:23:26.815 18:10:33 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:27.073 geninfo: WARNING: invalid characters removed from testname! 00:23:53.716 18:10:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:56.998 18:11:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:59.537 18:11:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:02.078 18:11:09 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:05.404 18:11:11 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:07.302 18:11:14 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:09.834 18:11:16 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:09.834 18:11:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 
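The coverage steps above reduce to a capture, merge and filter pipeline around lcov. Stripped of the repeated --rc branch/function/HTML options, it is roughly the following; the output paths and filter patterns are the ones used in this trace.

    out=/home/vagrant/spdk_repo/spdk/../output
    repo=/home/vagrant/spdk_repo/spdk

    # Capture counters for this run, then fold them into the pre-build baseline.
    lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Strip third-party and uninteresting paths from the combined report in place.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done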
00:24:09.834 18:11:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:24:09.834 18:11:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.834 18:11:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.834 18:11:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.834 18:11:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.834 18:11:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.834 18:11:16 -- paths/export.sh@5 -- $ export PATH 00:24:09.834 18:11:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.834 18:11:16 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:24:09.834 18:11:16 -- common/autobuild_common.sh@447 -- $ date +%s 00:24:09.834 18:11:16 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721844676.XXXXXX 00:24:09.834 18:11:16 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721844676.EV2isA 00:24:09.834 18:11:16 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:24:09.834 18:11:16 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:24:09.834 18:11:16 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:24:09.834 18:11:16 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:24:09.834 18:11:16 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:24:09.834 18:11:16 -- common/autobuild_common.sh@463 -- $ get_config_params 00:24:09.834 18:11:16 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:24:09.834 18:11:16 -- common/autotest_common.sh@10 -- $ set +x 00:24:09.834 18:11:16 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi 
--with-golang' 00:24:09.834 18:11:16 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:24:09.834 18:11:16 -- pm/common@17 -- $ local monitor 00:24:09.834 18:11:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:09.834 18:11:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:09.834 18:11:16 -- pm/common@25 -- $ sleep 1 00:24:09.834 18:11:16 -- pm/common@21 -- $ date +%s 00:24:09.834 18:11:16 -- pm/common@21 -- $ date +%s 00:24:09.834 18:11:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721844676 00:24:09.834 18:11:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721844676 00:24:09.834 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721844676_collect-vmstat.pm.log 00:24:09.834 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721844676_collect-cpu-load.pm.log 00:24:10.769 18:11:17 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:24:10.769 18:11:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:24:10.769 18:11:17 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:24:10.769 18:11:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:24:10.769 18:11:17 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:24:10.769 18:11:17 -- spdk/autopackage.sh@19 -- $ timing_finish 00:24:10.769 18:11:17 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:10.769 18:11:17 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:24:10.769 18:11:17 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:10.769 18:11:17 -- spdk/autopackage.sh@20 -- $ exit 0 00:24:10.769 18:11:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:24:10.769 18:11:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:24:10.769 18:11:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:24:10.769 18:11:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:10.769 18:11:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:24:10.769 18:11:17 -- pm/common@44 -- $ pid=101791 00:24:10.769 18:11:17 -- pm/common@50 -- $ kill -TERM 101791 00:24:10.769 18:11:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:10.769 18:11:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:24:10.769 18:11:17 -- pm/common@44 -- $ pid=101793 00:24:10.769 18:11:17 -- pm/common@50 -- $ kill -TERM 101793 00:24:10.769 + [[ -n 5155 ]] 00:24:10.769 + sudo kill 5155 00:24:10.778 [Pipeline] } 00:24:10.796 [Pipeline] // timeout 00:24:10.803 [Pipeline] } 00:24:10.821 [Pipeline] // stage 00:24:10.828 [Pipeline] } 00:24:10.847 [Pipeline] // catchError 00:24:10.858 [Pipeline] stage 00:24:10.861 [Pipeline] { (Stop VM) 00:24:10.876 [Pipeline] sh 00:24:11.153 + vagrant halt 00:24:15.344 ==> default: Halting domain... 00:24:21.909 [Pipeline] sh 00:24:22.186 + vagrant destroy -f 00:24:26.470 ==> default: Removing domain... 
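stop_monitor_resources above follows a plain pid-file pattern: each collector started for autopackage records its PID under the power/ output directory, and teardown sends SIGTERM to whatever those files point at. A minimal sketch of the stop side, using the pid-file names from this trace:

    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
        [[ -e $pidfile ]] && kill -TERM "$(<"$pidfile")"
    done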
00:24:26.483 [Pipeline] sh 00:24:26.773 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:24:26.781 [Pipeline] } 00:24:26.798 [Pipeline] // stage 00:24:26.803 [Pipeline] } 00:24:26.821 [Pipeline] // dir 00:24:26.828 [Pipeline] } 00:24:26.844 [Pipeline] // wrap 00:24:26.850 [Pipeline] } 00:24:26.865 [Pipeline] // catchError 00:24:26.875 [Pipeline] stage 00:24:26.878 [Pipeline] { (Epilogue) 00:24:26.894 [Pipeline] sh 00:24:27.177 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:33.853 [Pipeline] catchError 00:24:33.856 [Pipeline] { 00:24:33.871 [Pipeline] sh 00:24:34.212 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:34.469 Artifacts sizes are good 00:24:34.478 [Pipeline] } 00:24:34.496 [Pipeline] // catchError 00:24:34.508 [Pipeline] archiveArtifacts 00:24:34.514 Archiving artifacts 00:24:34.693 [Pipeline] cleanWs 00:24:34.704 [WS-CLEANUP] Deleting project workspace... 00:24:34.704 [WS-CLEANUP] Deferred wipeout is used... 00:24:34.711 [WS-CLEANUP] done 00:24:34.713 [Pipeline] } 00:24:34.732 [Pipeline] // stage 00:24:34.738 [Pipeline] } 00:24:34.755 [Pipeline] // node 00:24:34.761 [Pipeline] End of Pipeline 00:24:34.805 Finished: SUCCESS